
A

abort(Throwable) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
Abort all of the writes done by any writers returned by ShuffleMapOutputWriter.getPartitionWriter(int).
abort(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
Aborts this writing job because some data writers failed and keep failing when retried, or the Spark job fails for some unknown reason, or BatchWrite.onDataWriterCommit(WriterCommitMessage) fails, or BatchWrite.commit(WriterCommitMessage[]) fails.
abort() - Method in interface org.apache.spark.sql.connector.write.DataWriter
Aborts this writer if it failed.
abort(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
Aborts this writing job because some data writers failed and keep failing when retried, or the Spark job fails for some unknown reason, or StreamingWrite.commit(long, WriterCommitMessage[]) fails.
abortStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
Abort the changes that were staged, both in metadata and from temporary outputs of this table's writers.
abs(Column) - Static method in class org.apache.spark.sql.functions
Computes the absolute value of a numeric value.
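For illustration, a minimal Java sketch of abs (assuming a DataFrame df with a numeric column "delta"; both names are made up):

    import static org.apache.spark.sql.functions.abs;
    import static org.apache.spark.sql.functions.col;

    // Select the absolute value of a numeric column.
    df.select(abs(col("delta")));
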
abs(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
abs() - Method in class org.apache.spark.sql.types.Decimal
 
abs(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
abs(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
abs(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
 
abs(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
abs(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
 
abs(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
abs(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
abs(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
absent() - Static method in class org.apache.spark.api.java.Optional
 
AbsoluteError - Class in org.apache.spark.mllib.tree.loss
Class for absolute error loss calculation (for regression).
AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
 
AbstractLauncher<T extends AbstractLauncher<T>> - Class in org.apache.spark.launcher
Base class for launcher implementations.
accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
accept(Path) - Method in class org.apache.spark.ml.image.SamplePathFilter
 
acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
AcceptsLatestSeenOffset - Interface in org.apache.spark.sql.connector.read.streaming
Indicates that the source accepts the latest seen offset, which requires streaming execution to provide the latest seen offset when restarting the streaming query from a checkpoint.
acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType
 
accessNonExistentAccumulatorError(long) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
accId() - Method in class org.apache.spark.CleanAccum
 
accumCleaned(long) - Method in interface org.apache.spark.CleanerListener
 
AccumulableInfo - Class in org.apache.spark.scheduler
:: DeveloperApi :: Information about an AccumulatorV2 modified during a task or stage.
AccumulableInfo - Class in org.apache.spark.status.api.v1
 
accumulableInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
accumulableInfoToJson(AccumulableInfo) - Static method in class org.apache.spark.util.JsonProtocol
 
accumulables() - Method in class org.apache.spark.scheduler.StageInfo
Terminal values of accumulables updated during this stage, including all the user-defined accumulators.
accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
Intermediate updates to accumulables during this task.
accumulablesToJson(Iterable<AccumulableInfo>) - Static method in class org.apache.spark.util.JsonProtocol
 
AccumulatorContext - Class in org.apache.spark.util
An internal class used to track accumulators by Spark itself.
AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
 
ACCUMULATORS() - Static method in class org.apache.spark.status.TaskIndexNames
 
accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
 
accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
 
AccumulatorV2<IN,OUT> - Class in org.apache.spark.util
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2
 
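As a rough Java sketch of subclassing AccumulatorV2, a custom accumulator implements isZero, copy, reset, add, merge, and value; the MaxAccumulator name and its semantics here are hypothetical:

    import org.apache.spark.util.AccumulatorV2;

    // Tracks the maximum value seen across all tasks.
    class MaxAccumulator extends AccumulatorV2<Long, Long> {
      private long max = Long.MIN_VALUE;

      @Override public boolean isZero() { return max == Long.MIN_VALUE; }
      @Override public AccumulatorV2<Long, Long> copy() {
        MaxAccumulator acc = new MaxAccumulator();
        acc.max = this.max;
        return acc;
      }
      @Override public void reset() { max = Long.MIN_VALUE; }
      @Override public void add(Long v) { max = Math.max(max, v); }
      @Override public void merge(AccumulatorV2<Long, Long> other) { max = Math.max(max, other.value()); }
      @Override public Long value() { return max; }
    }

Such an accumulator is registered on the driver with SparkContext.register before being used in tasks.
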
accumUpdates() - Method in class org.apache.spark.ExceptionFailure
 
accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
 
accumUpdates() - Method in class org.apache.spark.TaskKilled
 
accuracy() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
Returns accuracy.
accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
 
accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
Returns accuracy.
acos(Column) - Static method in class org.apache.spark.sql.functions
 
acos(String) - Static method in class org.apache.spark.sql.functions
 
acosh(Column) - Static method in class org.apache.spark.sql.functions
 
acosh(String) - Static method in class org.apache.spark.sql.functions
 
acquire(Seq<String>) - Method in interface org.apache.spark.resource.ResourceAllocator
Acquire a sequence of resource addresses (for a launched task); these addresses must be available.
actionNotAllowedOnTableSincePartitionMetadataNotStoredError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
actionNotAllowedOnTableWithFilesourcePartitionManagementDisabledError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
ActivationFunction - Interface in org.apache.spark.ml.ann
Trait for functions and their derivatives for functional layers.
active() - Static method in class org.apache.spark.sql.SparkSession
Returns the currently active SparkSession, otherwise the default one.
active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Returns a list of active queries associated with this SQLContext.
active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
 
ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
 
activeIterator() - Method in interface org.apache.spark.ml.linalg.Vector
Returns an iterator over all the active elements of this vector.
activeIterator() - Method in interface org.apache.spark.mllib.linalg.Vector
Returns an iterator over all the active elements of this vector.
activeStages() - Method in class org.apache.spark.status.LiveJob
 
activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
activeTasks() - Method in class org.apache.spark.status.LiveJob
 
activeTasks() - Method in class org.apache.spark.status.LiveStage
 
activeTasksPerExecutor() - Method in class org.apache.spark.status.LiveStage
 
add(Tuple2<Vector, Object>) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
Add a new training instance to this ExpectationAggregator, update the weights, means and covariances for each distribution, and update the log likelihood.
add(Term) - Static method in class org.apache.spark.ml.feature.Dot
 
add(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
add(Term) - Method in interface org.apache.spark.ml.feature.Term
Creates a summation term by concatenation of terms.
add(Datum) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
Add a single data point to this aggregator.
add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
 
add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
Adds a new document.
add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Adds the given block matrix other to this block matrix: this + other.
add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
Add a new sample to this summarizer, and update the statistical summary.
add(StructField) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field.
add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new nullable field with no metadata.
add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field with no metadata.
add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata.
add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata.
add(String, String) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new nullable field with no metadata where the dataType is specified as a String.
add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field with no metadata where the dataType is specified as a String.
add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
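These overloads chain naturally when building a schema; a short Java sketch (the field names are illustrative):

    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    StructType schema = new StructType()
        .add("id", DataTypes.LongType, false)              // non-nullable field
        .add("name", DataTypes.StringType)                 // nullable, no metadata
        .add("score", "double")                            // dataType given as a String
        .add("note", "string", true, "free-form comment"); // with a comment
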
add(double) - Method in class org.apache.spark.sql.util.NumericHistogram
Adds a new data point to the histogram approximation.
add(T) - Method in class org.apache.spark.sql.util.SQLOpenHashSet
 
add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
 
add(IN) - Method in class org.apache.spark.util.AccumulatorV2
Takes the inputs and accumulates.
add(T) - Method in class org.apache.spark.util.CollectionAccumulator
 
add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
Adds v to the accumulator, i.e. increments sum by v and count by 1.
add(double) - Method in class org.apache.spark.util.DoubleAccumulator
Adds v to the accumulator, i.e. increments sum by v and count by 1.
add(Long) - Method in class org.apache.spark.util.LongAccumulator
Adds v to the accumulator, i.e. increments sum by v and count by 1.
add(long) - Method in class org.apache.spark.util.LongAccumulator
Adds v to the accumulator, i.e. increments sum by v and count by 1.
add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
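A short Java sketch of the CountMinSketch API (the eps/confidence/seed values are arbitrary):

    import org.apache.spark.util.sketch.CountMinSketch;

    // Relative error 0.1%, confidence 99%, fixed seed.
    CountMinSketch sketch = CountMinSketch.create(0.001, 0.99, 42);
    sketch.addString("spark");                       // increment by one
    sketch.addString("spark");
    sketch.addLong(7L, 3);                           // increment the count of 7 by 3
    long estimate = sketch.estimateCount("spark");   // approximately 2
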
add_months(Column, int) - Static method in class org.apache.spark.sql.functions
Returns the date that is numMonths after startDate.
add_months(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns the date that is numMonths after startDate.
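For example (a minimal sketch, assuming a DataFrame df with a date column "start_date" and an int column "n"):

    import static org.apache.spark.sql.functions.add_months;
    import static org.apache.spark.sql.functions.col;

    df.select(add_months(col("start_date"), 3));         // fixed offset of three months
    df.select(add_months(col("start_date"), col("n")));  // per-row offset
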
addAppArgs(String...) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds command line arguments for the application.
addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
 
addArchive(String) - Method in class org.apache.spark.SparkContext
:: Experimental :: Add an archive to be downloaded and unpacked with this Spark job on every node.
addBin(double, double, int) - Method in class org.apache.spark.sql.util.NumericHistogram
Sets the histogram bin at the given index.
addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
addCatalogInCacheTableAsSelectNotAllowedError(String, SqlBaseParser.CacheTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
addChunk(ShuffleBlockChunkId, RoaringBitmap) - Method in class org.apache.spark.storage.PushBasedFetchHelper
This is executed by the task thread when the iterator.next() is invoked and the iterator processes a response of type ShuffleBlockFetcherIterator.PushMergedLocalMetaFetchResult.
addColumn(String[], DataType) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding an optional column.
addColumn(String[], DataType, boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding a column.
addColumn(String[], DataType, boolean, String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding a column.
addColumn(String[], DataType, boolean, String, TableChange.ColumnPosition) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
Create a TableChange for adding a column.
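A brief Java sketch of building addColumn changes; the field names and the ident variable are hypothetical:

    import org.apache.spark.sql.connector.catalog.TableChange;
    import org.apache.spark.sql.types.DataTypes;

    TableChange addNullable = TableChange.addColumn(
        new String[]{"point", "z"}, DataTypes.DoubleType);             // nullable by default
    TableChange addRequired = TableChange.addColumn(
        new String[]{"id"}, DataTypes.LongType, false, "primary id");  // non-nullable, with comment
    // Applied through TableCatalog.alterTable(ident, addNullable, addRequired).
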
addColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a file to be submitted with the application.
addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
addFile(String) - Method in class org.apache.spark.SparkContext
Add a file to be downloaded with this Spark job on every node.
addFile(String, boolean) - Method in class org.apache.spark.SparkContext
Add a file to be downloaded with this Spark job on every node.
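Typical usage pairs addFile with SparkFiles.get on the executors (a minimal sketch; the path is made up):

    import org.apache.spark.SparkFiles;

    sc.addFile("hdfs:///configs/lookup.csv");
    // Later, inside a task running on any node:
    String localPath = SparkFiles.get("lookup.csv");
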
addFilesWithAbsolutePathUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
addFilter(ServletContextHandler, String, Map<String, String>) - Static method in class org.apache.spark.ui.JettyUtils
 
addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a param with multiple values (overwrites if the input param exists).
addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a double param with multiple values.
addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds an int param with multiple values.
addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a float param with multiple values.
addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a long param with multiple values.
addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Adds a boolean param with true and false.
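A compact Java sketch combining several addGrid overloads (the estimator and values are illustrative):

    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.regression.LinearRegression;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    LinearRegression lr = new LinearRegression();
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[]{0.01, 0.1})  // double param
        .addGrid(lr.maxIter(), new int[]{10, 100})        // int param
        .addGrid(lr.fitIntercept())                       // boolean param: true and false
        .build();
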
addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
addJar(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a jar file to be submitted with the application.
addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
addJar(String) - Method in class org.apache.spark.SparkContext
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
addJarsToClassPath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
 
addJarToClasspath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
 
addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
Adds a listener to be notified of changes to the handle's information.
addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Register a StreamingQueryListener to receive up-calls for life cycle events of StreamingQuery.
addListener(L) - Method in interface org.apache.spark.util.ListenerBus
Add a listener to listen for events.
addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
Add Hadoop configuration specific to a single partition and attempt.
addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
addMapOutput(int, MapStatus) - Method in class org.apache.spark.ShuffleStatus
Register a map output.
addMergeResult(int, org.apache.spark.scheduler.MergeStatus) - Method in class org.apache.spark.ShuffleStatus
Register a merge result.
addMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
Add m2 values to m1.
addNaN() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
 
addNewFunctionMismatchedWithFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
addNull() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
 
addPartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq
 
addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
 
addPyFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a Python file / zip / egg to be submitted with the application.
addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
addRequest(TaskResourceRequest) - Method in class org.apache.spark.resource.TaskResourceRequests
Adds a TaskResourceRequest to the request set.
address() - Method in class org.apache.spark.BarrierTaskInfo
 
address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
 
addresses() - Method in class org.apache.spark.resource.ResourceInformation
 
addresses() - Method in class org.apache.spark.resource.ResourceInformationJson
 
addSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable
 
addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
Adds a shutdown hook with default priority.
addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
Adds a shutdown hook with the given priority.
addSparkArg(String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds a no-value argument to the Spark invocation.
addSparkArg(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
Adds an argument with a value to the Spark invocation.
addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher
 
addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
 
addSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
:: DeveloperApi :: Register a listener to receive up-calls from events that happen during execution.
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Add a StreamingListener object for receiving system events related to streaming.
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
Add a StreamingListener object for receiving system events related to streaming.
addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by one.
addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
Increments item's count by count.
addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.BarrierTaskContext
 
addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
Adds a (Java friendly) listener to be executed on task completion.
addTaskCompletionListener(Function1<TaskContext, U>) - Method in class org.apache.spark.TaskContext
Adds a listener in the form of a Scala closure to be executed on task completion.
addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.BarrierTaskContext
 
addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
Adds a listener to be executed on task failure.
addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
Adds a listener to be executed on task failure.
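A small Java sketch of registering a completion listener from inside a task; the cleanup body is a placeholder, and rdd is an assumed JavaRDD:

    import org.apache.spark.TaskContext;

    rdd.foreachPartition(it -> {
      TaskContext.get().addTaskCompletionListener(ctx -> {
        // Release per-partition resources here (placeholder).
      });
      while (it.hasNext()) {
        it.next();  // process the element (placeholder)
      }
    });
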
addTaskResourceRequests(SparkConf, TaskResourceRequests) - Static method in class org.apache.spark.resource.ResourceUtils
 
addTaskSetManager(Schedulable, Properties) - Method in interface org.apache.spark.scheduler.SchedulableBuilder
 
addTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
addTime() - Method in class org.apache.spark.status.api.v1.ProcessSummary
 
addURL(URL) - Method in class org.apache.spark.util.MutableURLClassLoader
 
AddWebUIFilter(String, Map<String, String>, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
 
AddWebUIFilter$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
 
aesCryptoError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
aesModeUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
after(String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange.ColumnPosition
 
AFTSurvivalRegression - Class in org.apache.spark.ml.regression
Fit a parametric survival regression model named accelerated failure time (AFT) model (see Accelerated failure time model (Wikipedia)) based on the Weibull distribution of the survival time.
AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
 
AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
 
AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
Model produced by AFTSurvivalRegression.
AFTSurvivalRegressionParams - Interface in org.apache.spark.ml.regression
Params for accelerated failure time (AFT) regression.
agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
Aggregates on the entire Dataset without groups.
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Aggregates on the entire Dataset without groups.
agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Aggregates on the entire Dataset without groups.
agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
(Java-specific) Aggregates on the entire Dataset without groups.
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
Aggregates on the entire Dataset without groups.
agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>, TypedColumn<V, U8>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute aggregates by specifying a series of aggregate columns.
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
(Scala-specific) Compute aggregates by specifying the column names and aggregate methods.
agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
(Scala-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
(Java-specific) Compute aggregates by specifying a map from column name to aggregate methods.
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute aggregates by specifying a series of aggregate columns.
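For illustration, both forms in one short Java sketch (assuming a DataFrame df with "dept", "age", and "salary" columns):

    import static org.apache.spark.sql.functions.*;

    // Aggregate over the whole Dataset, no groups:
    df.agg(max(col("age")), avg(col("salary")));

    // Aggregate per group via RelationalGroupedDataset:
    df.groupBy(col("dept")).agg(avg(col("salary")), count(lit(1)).alias("n"));
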
aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value".
aggregate(Column, Column, Function2<Column, Column, Column>, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
aggregate(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
Applies a binary operator to an initial state and all elements in the array, and reduces this to a single state.
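A minimal Java sketch (assuming a column "values" holding arrays of doubles; with Scala 2.12+, a Java lambda can stand in for the Scala function argument):

    import static org.apache.spark.sql.functions.*;

    // Sum the array elements, starting from an initial state of 0.0.
    df.select(aggregate(col("values"), lit(0.0), (acc, x) -> acc.plus(x)));
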
aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using given combine functions and a neutral "zero value".
aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Aggregate the values of each key, using given combine functions and a neutral "zero value".
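For example, computing per-key (sum, count) pairs, a standard building block for per-key means (a sketch assuming a JavaPairRDD<String, Integer> named pairs):

    import scala.Tuple2;
    import org.apache.spark.api.java.JavaPairRDD;

    JavaPairRDD<String, Tuple2<Integer, Integer>> sumCounts = pairs.aggregateByKey(
        new Tuple2<>(0, 0),                                         // neutral "zero value"
        (acc, v) -> new Tuple2<>(acc._1() + v, acc._2() + 1),       // fold a value into an accumulator
        (a, b) -> new Tuple2<>(a._1() + b._1(), a._2() + b._2()));  // merge two accumulators
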
AggregatedDialect - Class in org.apache.spark.sql.jdbc
AggregatedDialect can unify multiple dialects into one virtual Dialect.
AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect
 
aggregateExpressionRequiredForPivotError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
aggregateExpressions() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
 
AggregateFunc - Interface in org.apache.spark.sql.connector.expressions.aggregate
Base interface for aggregate functions.
AggregateFunction<S extends java.io.Serializable,R> - Interface in org.apache.spark.sql.connector.catalog.functions
Interface for a function that produces a result value by aggregating over multiple input rows.
aggregateInAggregateFilterError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
Aggregates values from the neighboring edges and vertices of each vertex.
aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl
 
aggregateTaskMetrics(long[]) - Method in class org.apache.spark.sql.connector.metric.CustomAvgMetric
 
aggregateTaskMetrics(long[]) - Method in interface org.apache.spark.sql.connector.metric.CustomMetric
Given an array of task metric values, returns the aggregated final metric value.
aggregateTaskMetrics(long[]) - Method in class org.apache.spark.sql.connector.metric.CustomSumMetric
 
aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
AggregatingEdgeContext<VD,ED,A> - Class in org.apache.spark.graphx.impl
 
AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
Aggregation - Class in org.apache.spark.sql.connector.expressions.aggregate
Aggregation in a SQL statement.
Aggregation(AggregateFunc[], Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
 
aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVC
 
aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegression
 
aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
aggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth
Param for suggested depth for treeAggregate (>= 2).
aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
 
aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegression
 
aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
aggregationFunctionAppliedOnNonNumericColumnError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
aggregationFunctionAppliedOnNonNumericColumnError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
Aggregator<K,V,C> - Class in org.apache.spark
:: DeveloperApi :: A set of functions used to aggregate data.
Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator
 
aggregator() - Method in class org.apache.spark.ShuffleDependency
 
Aggregator<IN,BUF,OUT> - Class in org.apache.spark.sql.expressions
A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
Aggregator() - Constructor for class org.apache.spark.sql.expressions.Aggregator
 
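A rough Java sketch of a typed Aggregator (the LongAverage name and buffer layout are hypothetical):

    import org.apache.spark.sql.Encoder;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.expressions.Aggregator;

    // A typed average over Long inputs; the buffer is a (sum, count) pair.
    class LongAverage extends Aggregator<Long, long[], Double> {
      @Override public long[] zero() { return new long[]{0L, 0L}; }
      @Override public long[] reduce(long[] b, Long a) { b[0] += a; b[1] += 1; return b; }
      @Override public long[] merge(long[] b1, long[] b2) { b1[0] += b2[0]; b1[1] += b2[1]; return b1; }
      @Override public Double finish(long[] r) { return r[1] == 0 ? Double.NaN : (double) r[0] / r[1]; }
      @Override public Encoder<long[]> bufferEncoder() { return Encoders.kryo(long[].class); }
      @Override public Encoder<Double> outputEncoder() { return Encoders.DOUBLE(); }
    }

An instance can then be applied to a Dataset<Long> as ds.select(new LongAverage().toColumn()).
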
aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
 
aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
 
aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
 
aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
 
aic() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
 
Algo - Class in org.apache.spark.mllib.tree.configuration
Enum to select the algorithm for the decision tree.
Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo
 
algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
 
algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
 
algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
 
algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
 
alias(String) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
alias(String) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with an alias set.
alias(Symbol) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset with an alias set.
aliasesNumberNotMatchUDTFOutputError(int, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
aliasNumberNotMatchColumnNumberError(int, int, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
All - Static variable in class org.apache.spark.graphx.TripletFields
Expose all the fields (source, edge, and destination).
ALL_GATHER() - Static method in class org.apache.spark.RequestMethod
 
allAvailable() - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
 
allGather(String) - Method in class org.apache.spark.BarrierTaskContext
:: Experimental :: Blocks until all tasks in the same stage have reached this routine.
AllJobsCancelled - Class in org.apache.spark.scheduler
 
AllJobsCancelled() - Constructor for class org.apache.spark.scheduler.AllJobsCancelled
 
allocate(int) - Method in class org.apache.spark.sql.util.NumericHistogram
Sets the number of histogram bins to use for approximating data.
allocator() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
 
AllReceiverIds - Class in org.apache.spark.streaming.scheduler
A message used by ReceiverTracker to ask for the ids of all receivers still stored in ReceiverTrackerEndpoint.
AllReceiverIds() - Constructor for class org.apache.spark.streaming.scheduler.AllReceiverIds
 
allRemovalsTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
 
allSources() - Static method in class org.apache.spark.metrics.source.StaticSources
The set of all static sources.
allSupportedExecutorResources() - Static method in class org.apache.spark.resource.ResourceProfile
Return all supported Spark built-in executor resources; custom resources like GPUs/FPGAs are excluded.
allUpdatesTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
 
alpha() - Method in class org.apache.spark.ml.recommendation.ALS
 
alpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams
Param for the alpha parameter in the implicit preference formulation (nonnegative).
alpha() - Method in class org.apache.spark.mllib.random.WeibullGenerator
 
ALS - Class in org.apache.spark.ml.recommendation
Alternating Least Squares (ALS) matrix factorization.
ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS
 
ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS
 
ALS - Class in org.apache.spark.mllib.recommendation
Alternating Least Squares matrix factorization.
ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
Constructs an ALS instance with default parameters: {numBlocks: -1, rank: 10, iterations: 10, lambda: 0.01, implicitPrefs: false, alpha: 1.0}.
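A minimal Java sketch of training with the mllib ALS (ratings is an assumed JavaRDD<Rating>):

    import org.apache.spark.mllib.recommendation.ALS;
    import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;

    // rank = 10, iterations = 10, lambda = 0.01
    MatrixFactorizationModel model = ALS.train(ratings.rdd(), 10, 10, 0.01);
    double predicted = model.predict(1, 42);  // predicted rating of product 42 by user 1
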
ALS.InBlock$ - Class in org.apache.spark.ml.recommendation
 
ALS.LeastSquaresNESolver - Interface in org.apache.spark.ml.recommendation
Trait for least squares solvers applied to the normal equation.
ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
Rating class for better code readability.
ALS.Rating$ - Class in org.apache.spark.ml.recommendation
 
ALS.RatingBlock$ - Class in org.apache.spark.ml.recommendation
 
ALSModel - Class in org.apache.spark.ml.recommendation
Model fitted by ALS.
ALSModelParams - Interface in org.apache.spark.ml.recommendation
Common params for ALS and ALSModel.
ALSParams - Interface in org.apache.spark.ml.recommendation
Common params for ALS.
alterAddColNotSupportDatasourceTableError(Object, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterAddColNotSupportViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterColumnCannotFindColumnInV1TableError(String, org.apache.spark.sql.connector.catalog.V1Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterDatabaseLocationUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterNamespace(String[], NamespaceChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
alterNamespace(String[], NamespaceChange...) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
Apply a set of metadata changes to a namespace in the catalog.
alterTable(Identifier, TableChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
 
alterTable(Identifier, TableChange...) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
Apply a set of changes to a table in the catalog.
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
alterTable(String, Seq<TableChange>, int) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Alter an existing table.
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
alterTableChangeColumnNotSupportedForColumnTypeError(StructField, StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterTableRecoverPartitionsNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterTableSerDePropertiesNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterTableSetSerdeForSpecificPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterTableSetSerdeNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
alterTableWithDropPartitionAndPurgeUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
alterV2TableSetLocationWithPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
AlwaysFalse - Class in org.apache.spark.sql.connector.expressions.filter
A predicate that always evaluates to false.
AlwaysFalse() - Constructor for class org.apache.spark.sql.connector.expressions.filter.AlwaysFalse
 
AlwaysFalse - Class in org.apache.spark.sql.sources
A filter that always evaluates to false.
AlwaysFalse() - Constructor for class org.apache.spark.sql.sources.AlwaysFalse
 
AlwaysTrue - Class in org.apache.spark.sql.connector.expressions.filter
A predicate that always evaluates to true.
AlwaysTrue() - Constructor for class org.apache.spark.sql.connector.expressions.filter.AlwaysTrue
 
AlwaysTrue - Class in org.apache.spark.sql.sources
A filter that always evaluates to true.
AlwaysTrue() - Constructor for class org.apache.spark.sql.sources.AlwaysTrue
 
am() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
 
ambiguousAttributesInSelfJoinError(Seq<AttributeReference>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
ambiguousFieldNameError(Seq<String>, int, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
ambiguousReferenceToFieldsError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
ambiguousRelationAliasNameInNestedCTEError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
amount() - Method in class org.apache.spark.resource.ExecutorResourceRequest
 
amount() - Method in class org.apache.spark.resource.ResourceRequest
 
AMOUNT() - Static method in class org.apache.spark.resource.ResourceUtils
 
amount() - Method in class org.apache.spark.resource.TaskResourceRequest
 
AnalysisException - Exception in org.apache.spark.sql
Thrown when a query fails to analyze, usually because the query itself is invalid.
AnalysisException(String, String[], Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
 
AnalysisException(String, String[]) - Constructor for exception org.apache.spark.sql.AnalysisException
 
AnalysisException(String, String[], Origin) - Constructor for exception org.apache.spark.sql.AnalysisException
 
analyzeTableNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
analyzeTableNotSupportedOnViewsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
analyzingColumnStatisticsNotSupportedForColumnTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
and(Column) - Method in class org.apache.spark.sql.Column
Boolean AND.
And - Class in org.apache.spark.sql.connector.expressions.filter
A predicate that evaluates to true iff both left and right evaluate to true.
And(Predicate, Predicate) - Constructor for class org.apache.spark.sql.connector.expressions.filter.And
 
And - Class in org.apache.spark.sql.sources
A filter that evaluates to true iff both left and right evaluate to true.
And(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.And
 
ANOVATest - Class in org.apache.spark.ml.stat
ANOVA Test for continuous data.
ANOVATest() - Constructor for class org.apache.spark.ml.stat.ANOVATest
 
ansiDateTimeError(DateTimeException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
ansiDateTimeParseError(DateTimeParseException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
ansiIllegalArgumentError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
ansiIllegalArgumentError(IllegalArgumentException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
ansiParseError(ParseException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
antecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
 
ANY() - Static method in class org.apache.spark.scheduler.TaskLocality
 
AnyDataType - Class in org.apache.spark.sql.types
An AbstractDataType that matches any concrete data type.
AnyDataType() - Constructor for class org.apache.spark.sql.types.AnyDataType
 
anyNull() - Method in interface org.apache.spark.sql.Row
Returns true if there are any NULL values in this row.
anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
 
anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarRow
 
AnyTimestampType - Class in org.apache.spark.sql.types
 
AnyTimestampType() - Constructor for class org.apache.spark.sql.types.AnyTimestampType
 
ApiHelper - Class in org.apache.spark.ui.jobs
 
ApiHelper() - Constructor for class org.apache.spark.ui.jobs.ApiHelper
 
ApiRequestContext - Interface in org.apache.spark.status.api.v1
 
appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
append() - Method in class org.apache.spark.sql.DataFrameWriterV2
Append the contents of the data frame to the output table.
Append() - Static method in class org.apache.spark.sql.streaming.OutputMode
OutputMode in which only the new rows in the streaming DataFrame/Dataset will be written to the sink.
appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
Returns a new vector with 1.0 (bias) appended to the input vector.
appendColumn(StructType, String, DataType, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
Appends a new column to the input schema.
appendColumn(StructType, StructField) - Static method in class org.apache.spark.ml.util.SchemaUtils
Appends a new column to the input schema.
AppHistoryServerPlugin - Interface in org.apache.spark.status
An interface for creating history listeners (to replay event logs) defined in other modules like SQL, and for setting up the plugin's UI to rebuild the history UI.
appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
appId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
 
appId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
 
appId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
 
appId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
 
APPLICATION_EXECUTOR_LIMIT() - Static method in class org.apache.spark.ui.ToolTips
 
APPLICATION_MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
 
applicationAttemptId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get the attempt ID for this run, if the cluster manager supports multiple attempts.
applicationAttemptId() - Method in interface org.apache.spark.scheduler.TaskScheduler
Get an application's attempt ID associated with the job.
applicationAttemptId() - Method in class org.apache.spark.SparkContext
 
ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1
 
applicationEndFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
applicationEndToJson(SparkListenerApplicationEnd) - Static method in class org.apache.spark.util.JsonProtocol
 
ApplicationEnvironmentInfo - Class in org.apache.spark.status.api.v1
 
applicationId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
Get an application ID associated with the job.
applicationId() - Method in interface org.apache.spark.scheduler.TaskScheduler
Get an application ID associated with the job.
applicationId() - Method in class org.apache.spark.SparkContext
A unique identifier for the Spark application.
ApplicationInfo - Class in org.apache.spark.status.api.v1
 
APPLICATIONS() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
 
applicationStartFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
applicationStartToJson(SparkListenerApplicationStart) - Static method in class org.apache.spark.util.JsonProtocol
 
ApplicationStatus - Enum in org.apache.spark.status.api.v1
 
apply(T1) - Static method in class org.apache.spark.CleanAccum
 
apply(T1) - Static method in class org.apache.spark.CleanBroadcast
 
apply(T1) - Static method in class org.apache.spark.CleanCheckpoint
 
apply(T1) - Static method in class org.apache.spark.CleanRDD
 
apply(T1) - Static method in class org.apache.spark.CleanShuffle
 
apply(T1) - Static method in class org.apache.spark.CleanSparkListener
 
apply(T1, T2) - Static method in class org.apache.spark.ContextBarrierId
 
apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.ExceptionFailure
 
apply(T1, T2, T3) - Static method in class org.apache.spark.ExecutorLostFailure
 
apply(T1) - Static method in class org.apache.spark.ExecutorRegistered
 
apply(T1) - Static method in class org.apache.spark.ExecutorRemoved
 
apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.FetchFailed
 
apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
Construct a graph from a collection of vertices and edges with attributes.
apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from edges, setting referenced vertices to defaultVertexAttr.
apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from vertices and edges, setting missing vertices to defaultVertexAttr.
apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
Execute a Pregel-like iterative vertex-parallel abstraction.
apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an EdgeRDD) from an RDD of vertex-attribute pairs.
apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
apply(DenseMatrix<Object>, DenseMatrix<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
 
apply(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>, Function2<Object, Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
 
apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its name.
apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
Gets an attribute by its index.
apply(T1, T2) - Static method in class org.apache.spark.ml.clustering.ClusterData
 
apply(T1, T2) - Static method in class org.apache.spark.ml.feature.LabeledPoint
 
apply(int, int) - Method in class org.apache.spark.ml.linalg.DenseMatrix
 
apply(int) - Method in class org.apache.spark.ml.linalg.DenseVector
 
apply(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
Gets the (i, j)-th element.
apply(int, int) - Method in class org.apache.spark.ml.linalg.SparseMatrix
 
apply(int) - Method in class org.apache.spark.ml.linalg.SparseVector
 
apply(int) - Method in interface org.apache.spark.ml.linalg.Vector
Gets the value of the ith element.
apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
Gets the value of the input param or its default value if it does not exist.
apply(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
Constructs the FamilyAndLink object from a parameter map.
apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceEnd
 
apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceStart
 
apply() - Static method in class org.apache.spark.ml.TransformEnd
 
apply() - Static method in class org.apache.spark.ml.TransformStart
 
apply(Split) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
 
apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
 
apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
 
apply(Row) - Method in class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
 
apply(BinaryConfusionMatrix) - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryClassificationMetricComputer
 
apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
 
apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Precision
 
apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Recall
 
apply(T1) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
 
apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.mllib.feature.VocabWord
 
apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector
 
apply(T1, T2) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
 
apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
 
apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
Gets the (i, j)-th element.
apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
apply(int) - Method in class org.apache.spark.mllib.linalg.SparseVector
 
apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
Gets the value of the ith element.
apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.recommendation.Rating
 
apply(T1, T2) - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
 
apply(T1, T2) - Static method in class org.apache.spark.mllib.stat.test.BinarySample
 
apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.Algo
 
apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
 
apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
apply(int, Node) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
 
apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
 
apply(int, Node) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
 
apply(Predict) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
 
apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
 
apply(Predict) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
 
apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
 
apply(Split) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
 
apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
 
apply(Split) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.tree.model.Split
 
apply(int) - Static method in class org.apache.spark.rdd.CheckpointState
 
apply(int) - Static method in class org.apache.spark.rdd.DeterministicLevel
 
apply(int) - Static method in class org.apache.spark.RequestMethod
 
apply(T1, T2) - Static method in class org.apache.spark.resource.ResourceInformationJson
 
apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.AccumulableInfo
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
apply(String, long, Enumeration.Value, ByteBuffer, Map<String, ResourceInformation>) - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
Alternate factory method that takes a ByteBuffer directly for the data field.
apply(T1, T2) - Static method in class org.apache.spark.scheduler.ExcludedExecutor
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.KillTask
 
apply() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.StatusUpdate
 
apply() - Static method in class org.apache.spark.scheduler.local.StopExecutor
 
apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage
 
apply(int) - Static method in class org.apache.spark.scheduler.SchedulingMode
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
 
apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
Deprecated.
 
apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
Deprecated.
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
 
apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
Deprecated.
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerLogStart
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
Deprecated.
 
apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
Deprecated.
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
 
apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
Deprecated.
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
 
apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
 
apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
 
apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
 
apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
 
apply(int) - Static method in class org.apache.spark.scheduler.TaskLocality
 
apply(Object) - Method in class org.apache.spark.sql.Column
Extracts a value or values from a complex type.
apply(String, Expression...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a logical transform for applying a named transform.
apply(String, Seq<Expression>) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
 
apply(String) - Method in class org.apache.spark.sql.Dataset
Selects column based on the column name and returns it as a Column.
apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Deprecated.
Creates a Column for this UDAF using given Columns as input arguments.
apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Deprecated.
Creates a Column for this UDAF using given Columns as input arguments.
apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Returns an expression that invokes the UDF, using the given arguments.
apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Returns an expression that invokes the UDF, using the given arguments.
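A minimal usage sketch (assuming a DataFrame df with an integer column "value"; names are illustrative):
    import org.apache.spark.sql.functions.udf

    val plusOne = udf((x: Int) => x + 1)
    // apply(Column...) builds the invocation expression for the UDF
    val withNext = df.withColumn("next", plusOne(df("value")))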
apply(T1, T2) - Static method in class org.apache.spark.sql.jdbc.JdbcType
 
apply() - Static method in class org.apache.spark.sql.Observation
Observation constructor for creating an anonymous observation.
apply(String) - Static method in class org.apache.spark.sql.Observation
Observation constructor for creating a named observation.
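A minimal usage sketch (assuming a DataFrame df; names are illustrative):
    import org.apache.spark.sql.Observation
    import org.apache.spark.sql.functions.{count, lit}

    val obs = Observation("stats")                           // named observation
    val observed = df.observe(obs, count(lit(1)).as("rows"))
    observed.collect()                                       // metrics are produced when an action runs
    val metrics = obs.get                                    // blocks until the metrics are available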
apply(Dataset<Row>, Seq<Expression>, RelationalGroupedDataset.GroupType) - Static method in class org.apache.spark.sql.RelationalGroupedDataset
 
apply(int) - Method in interface org.apache.spark.sql.Row
Returns the value at position i.
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.And
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualNullSafe
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualTo
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThan
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.In
 
apply(T1) - Static method in class org.apache.spark.sql.sources.IsNotNull
 
apply(T1) - Static method in class org.apache.spark.sql.sources.IsNull
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThan
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
 
apply(T1) - Static method in class org.apache.spark.sql.sources.Not
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.Or
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringContains
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringEndsWith
 
apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringStartsWith
 
apply(String, Option<Object>) - Static method in class org.apache.spark.sql.streaming.SinkProgress
 
apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
Construct an ArrayType object with the given element type.
apply(T1) - Static method in class org.apache.spark.sql.types.CharType
 
apply() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
 
apply(byte) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
 
apply(double) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(long) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigInteger) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigInt) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(String) - Static method in class org.apache.spark.sql.types.Decimal
 
apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
Construct a MapType object with the given key type and value type.
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.types.StructField
 
apply(String) - Method in class org.apache.spark.sql.types.StructType
Extracts the StructField with the given name.
apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
Returns a StructType containing StructFields of the given names, preserving the original order of fields.
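A minimal sketch of schema lookup (the schema below is illustrative):
    import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

    val schema = StructType(Seq(
      StructField("id", LongType),
      StructField("name", StringType)))
    val byName  = schema("id")         // StructField with the given name
    val subset  = schema(Set("id"))    // StructType of just the named fields
    val byIndex = schema(0)            // StructField at the given position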
apply(int) - Method in class org.apache.spark.sql.types.StructType
 
apply(T1) - Static method in class org.apache.spark.sql.types.VarcharType
 
apply() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
 
apply(byte) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
 
apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ApplicationInfo
 
apply(T1, T2) - Static method in class org.apache.spark.status.api.v1.sql.Metric
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.status.api.v1.sql.Node
 
apply(T1) - Static method in class org.apache.spark.status.api.v1.StackTrace
 
apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ThreadStackTrace
 
apply(int) - Method in class org.apache.spark.status.RDDPartitionSeq
 
apply(String) - Static method in class org.apache.spark.storage.BlockId
 
apply(String, String, int, Option<String>) - Static method in class org.apache.spark.storage.BlockManagerId
Returns a BlockManagerId for the given configuration.
apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId
 
apply(T1, T2) - Static method in class org.apache.spark.storage.BroadcastBlockId
 
apply(T1, T2) - Static method in class org.apache.spark.storage.RDDBlockId
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockBatchId
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockChunkId
 
apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleBlockId
 
apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleChecksumBlockId
 
apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleDataBlockId
 
apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
 
apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleMergedBlockId
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedDataBlockId
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShufflePushBlockId
 
apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object.
apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object without setting useOffHeap.
apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Create a new StorageLevel object from its integer representation.
apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
:: DeveloperApi :: Read StorageLevel object from ObjectInput stream.
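A minimal sketch of the developer-API factory (the flag values are illustrative):
    import org.apache.spark.storage.StorageLevel

    // arguments: useDisk, useMemory, useOffHeap, deserialized, replication
    val level = StorageLevel(true, true, false, false, 2)   // like MEMORY_AND_DISK_SER, with 2 replicas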
apply(T1, T2) - Static method in class org.apache.spark.storage.StreamBlockId
 
apply(T1) - Static method in class org.apache.spark.storage.TaskResultBlockId
 
apply(T1) - Static method in class org.apache.spark.streaming.Duration
 
apply(long) - Static method in class org.apache.spark.streaming.Milliseconds
 
apply(long) - Static method in class org.apache.spark.streaming.Minutes
 
apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
 
apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
 
apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
 
apply(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
 
apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
 
apply(long) - Static method in class org.apache.spark.streaming.Seconds
 
apply(T1, T2, T3) - Static method in class org.apache.spark.TaskCommitDenied
 
apply(T1, T2, T3, T4) - Static method in class org.apache.spark.TaskKilled
 
apply(int) - Static method in class org.apache.spark.TaskState
 
apply(TraversableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
Build a StatCounter from a list of values.
apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
Build a StatCounter from a list of values passed as variable-length arguments.
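A minimal usage sketch:
    import org.apache.spark.util.StatCounter

    val stats = StatCounter(1.0, 2.0, 3.0)   // varargs overload
    println(stats.mean)                      // 2.0
    println(stats.stdev)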
ApplyInPlace - Class in org.apache.spark.ml.ann
Implements in-place application of functions to arrays.
ApplyInPlace() - Constructor for class org.apache.spark.ml.ann.ApplyInPlace
 
applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a map and return the result.
applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a Java map and return the result.
applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a map and return the result.
applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply properties changes to a Java map and return the result.
applySchema(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
Use createDataFrame instead. Since 1.3.0.
applySchema(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
Use createDataFrame instead. Since 1.3.0.
applySchema(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
Use createDataFrame instead. Since 1.3.0.
applySchema(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
Deprecated.
Use createDataFrame instead. Since 1.3.0.
applySchemaChanges(StructType, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
Apply schema changes to a schema and return the result.
appName() - Method in class org.apache.spark.api.java.JavaSparkContext
 
appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
 
appName() - Method in class org.apache.spark.SparkContext
 
appName(String) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a name for the application, which will be shown in the Spark web UI.
approx_count_distinct(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approx_count_distinct(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approx_count_distinct(Column, double) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
approx_count_distinct(String, double) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the approximate number of distinct items in a group.
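A minimal usage sketch (assuming a DataFrame df with a column "user_id"):
    import org.apache.spark.sql.functions.approx_count_distinct

    // the second argument caps the estimation error (maximum relative standard deviation)
    df.select(approx_count_distinct("user_id", 0.05)).show()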
approxCountDistinct(Column) - Static method in class org.apache.spark.sql.functions
Deprecated.
Use approx_count_distinct. Since 2.1.0.
approxCountDistinct(String) - Static method in class org.apache.spark.sql.functions
Deprecated.
Use approx_count_distinct. Since 2.1.0.
approxCountDistinct(Column, double) - Static method in class org.apache.spark.sql.functions
Deprecated.
Use approx_count_distinct. Since 2.1.0.
approxCountDistinct(String, double) - Static method in class org.apache.spark.sql.functions
Deprecated.
Use approx_count_distinct. Since 2.1.0.
ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
 
ApproximateEvaluator<U,R> - Interface in org.apache.spark.partial
An object that computes a function incrementally by merging in results of type U from multiple tasks.
approxQuantile(String, double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the approximate quantiles of a numerical column of a DataFrame.
approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Calculates the approximate quantiles of numerical columns of a DataFrame.
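A minimal usage sketch (assuming a DataFrame df with a numeric column "age"):
    // probabilities, then the relative error target; 0.0 computes exact quantiles
    val Array(q1, median, q3) =
      df.stat.approxQuantile("age", Array(0.25, 0.5, 0.75), 0.01)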
appSparkVersion() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
AppStatusUtils - Class in org.apache.spark.status
 
AppStatusUtils() - Constructor for class org.apache.spark.status.AppStatusUtils
 
archives() - Method in class org.apache.spark.SparkContext
 
AreaUnderCurve - Class in org.apache.spark.mllib.evaluation
Computes the area under the curve (AUC) using the trapezoidal rule.
AreaUnderCurve() - Constructor for class org.apache.spark.mllib.evaluation.AreaUnderCurve
 
areaUnderPR() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Computes the area under the precision-recall curve.
areaUnderROC() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
Computes the area under the receiver operating characteristic (ROC) curve.
areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
 
areaUnderROC() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
 
areaUnderROC() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
 
areaUnderROC() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Computes the area under the receiver operating characteristic (ROC) curve.
argmax() - Method in class org.apache.spark.ml.linalg.DenseVector
 
argmax() - Method in class org.apache.spark.ml.linalg.SparseVector
 
argmax() - Method in interface org.apache.spark.ml.linalg.Vector
Find the index of a maximal element.
argmax() - Method in class org.apache.spark.mllib.linalg.DenseVector
 
argmax() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
argmax() - Method in interface org.apache.spark.mllib.linalg.Vector
Find the index of a maximal element.
arguments() - Method in interface org.apache.spark.sql.connector.expressions.Transform
Returns the arguments passed to the transform function.
arithmeticOverflowError(ArithmeticException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
arithmeticOverflowError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
ARPACK - Class in org.apache.spark.mllib.linalg
ARPACK routines for MLlib's vectors and matrices.
ARPACK() - Constructor for class org.apache.spark.mllib.linalg.ARPACK
 
array(DataType) - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type array.
array(Column...) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(String, String...) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
Creates a new array column.
array() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
 
array_contains(Column, Object) - Static method in class org.apache.spark.sql.functions
Returns null if the array is null, true if the array contains value, and false otherwise.
array_distinct(Column) - Static method in class org.apache.spark.sql.functions
Removes duplicate values from the array.
array_except(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns an array of the elements in the first array but not in the second array, without duplicates.
array_intersect(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns an array of the elements in the intersection of the given two arrays, without duplicates.
array_join(Column, String, String) - Static method in class org.apache.spark.sql.functions
Concatenates the elements of column using the delimiter.
array_join(Column, String) - Static method in class org.apache.spark.sql.functions
Concatenates the elements of column using the delimiter.
array_max(Column) - Static method in class org.apache.spark.sql.functions
Returns the maximum value in the array.
array_min(Column) - Static method in class org.apache.spark.sql.functions
Returns the minimum value in the array.
array_position(Column, Object) - Static method in class org.apache.spark.sql.functions
Locates the position of the first occurrence of the value in the given array, as a long.
array_remove(Column, Object) - Static method in class org.apache.spark.sql.functions
Removes all elements equal to the given element from the array.
array_repeat(Column, Column) - Static method in class org.apache.spark.sql.functions
Creates an array containing the left argument repeated the number of times given by the right argument.
array_repeat(Column, int) - Static method in class org.apache.spark.sql.functions
Creates an array containing the left argument repeated the number of times given by the right argument.
array_sort(Column) - Static method in class org.apache.spark.sql.functions
Sorts the input array in ascending order.
array_to_vector(Column) - Static method in class org.apache.spark.ml.functions
Converts a column of arrays of numeric type into a column of dense vectors in MLlib.
array_union(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns an array of the elements in the union of the given two arrays, without duplicates.
arrayComponentTypeUnsupportedError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
arrayLengthGt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
Check that the array length is greater than lowerBound.
arrays_overlap(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns true if a1 and a2 have at least one non-null element in common.
arrays_zip(Column...) - Static method in class org.apache.spark.sql.functions
Returns a merged array of structs in which the N-th struct contains all N-th values of input arrays.
arrays_zip(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns a merged array of structs in which the N-th struct contains all N-th values of input arrays.
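A minimal sketch exercising a few of the array functions above (assuming a SparkSession named spark):
    import org.apache.spark.sql.functions._
    import spark.implicits._

    val arrays = Seq((Seq(1, 2, 2, 3), Seq(3, 4))).toDF("a", "b")
    arrays.select(
      array_distinct($"a"),        // [1, 2, 3]
      array_intersect($"a", $"b"), // [3]
      array_union($"a", $"b"),     // [1, 2, 3, 4]
      array_max($"a")              // 3
    ).show()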
ArrayType - Class in org.apache.spark.sql.types
 
ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType
 
arrayValues() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
 
ArrowColumnVector - Class in org.apache.spark.sql.vectorized
A column vector backed by Apache Arrow.
ArrowColumnVector(ValueVector) - Constructor for class org.apache.spark.sql.vectorized.ArrowColumnVector
 
ArrowUtils - Class in org.apache.spark.sql.util
 
ArrowUtils() - Constructor for class org.apache.spark.sql.util.ArrowUtils
 
as(Encoder<U>) - Method in class org.apache.spark.sql.Column
Provides a type hint about the expected return value of this column.
as(String) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
as(Seq<String>) - Method in class org.apache.spark.sql.Column
(Scala-specific) Assigns the given aliases to the results of a table generating function.
as(String[]) - Method in class org.apache.spark.sql.Column
Assigns the given aliases to the results of a table generating function.
as(Symbol) - Method in class org.apache.spark.sql.Column
Gives the column an alias.
as(String, Metadata) - Method in class org.apache.spark.sql.Column
Gives the column an alias with metadata.
as(Encoder<U>) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset where each record has been mapped on to the specified type.
as(String) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset with an alias set.
as(Symbol) - Method in class org.apache.spark.sql.Dataset
(Scala-specific) Returns a new Dataset with an alias set.
as(Encoder<K>, Encoder<T>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Returns a KeyValueGroupedDataset where the data is grouped by the grouping expressions of current RelationalGroupedDataset.
asBinary() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
Convenient method for casting to binary logistic regression summary.
asBinary() - Method in interface org.apache.spark.ml.classification.RandomForestClassificationSummary
Convenient method for casting to BinaryRandomForestClassificationSummary.
asBreeze() - Method in interface org.apache.spark.ml.linalg.Matrix
Converts to a breeze matrix.
asBreeze() - Method in interface org.apache.spark.ml.linalg.Vector
Converts the instance to a breeze vector.
asBreeze() - Method in interface org.apache.spark.mllib.linalg.Matrix
Converts to a breeze matrix.
asBreeze() - Method in interface org.apache.spark.mllib.linalg.Vector
Converts the instance to a breeze vector.
asc() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on ascending order of the column.
asc(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column.
asc_nulls_first() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on ascending order of the column, and null values appear before non-null values.
asc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column, and null values appear before non-null values.
asc_nulls_last() - Method in class org.apache.spark.sql.Column
Returns a sort expression based on ascending order of the column, and null values appear after non-null values.
asc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
Returns a sort expression based on ascending order of the column, and null values appear after non-null values.
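A minimal usage sketch (assuming a DataFrame df with a nullable column "score"):
    import org.apache.spark.sql.functions.asc_nulls_last

    df.orderBy(asc_nulls_last("score")).show()     // nulls sort after non-null values
    df.orderBy(df("score").asc_nulls_last).show()  // equivalent Column form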
asCaseSensitiveMap() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
Returns the original case-sensitive map.
ascii(Column) - Static method in class org.apache.spark.sql.functions
Computes the numeric value of the first character of the string column, and returns the result as an int column.
asFunctionCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
 
asFunctionIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
 
asFunctionIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
 
asIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
 
asin(Column) - Static method in class org.apache.spark.sql.functions
 
asin(String) - Static method in class org.apache.spark.sql.functions
 
asinh(Column) - Static method in class org.apache.spark.sql.functions
 
asinh(String) - Static method in class org.apache.spark.sql.functions
 
asInteraction() - Static method in class org.apache.spark.ml.feature.Dot
 
asInteraction() - Method in interface org.apache.spark.ml.feature.InteractableTerm
Convert to ColumnInteraction to wrap all interactions.
asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
Read the elements of this stream through an iterator.
asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD
 
asJavaRDD() - Method in class org.apache.spark.api.r.RRDD
 
asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD
 
ask(Object) - Method in interface org.apache.spark.api.plugin.PluginContext
Send an RPC to the plugin's driver-side component.
asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
Read the elements of this stream through an iterator over key-value pairs.
AskPermissionToCommitOutput - Class in org.apache.spark.scheduler
 
AskPermissionToCommitOutput(int, int, int, int) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
askRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
Returns the default Spark timeout to use for RPC ask operations.
askStandaloneSchedulerToShutDownExecutorsError(Exception) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
 
askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
 
asML() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
asML() - Method in class org.apache.spark.mllib.linalg.DenseVector
 
asML() - Method in interface org.apache.spark.mllib.linalg.Matrix
Convert this matrix to the new mllib-local representation.
asML() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
asML() - Method in class org.apache.spark.mllib.linalg.SparseVector
 
asML() - Method in interface org.apache.spark.mllib.linalg.Vector
Convert this vector to the new mllib-local representation.
asMultipart() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.FunctionIdentifierHelper
 
asMultipartIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
 
asNamespaceCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
 
asNondeterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Updates UserDefinedFunction to nondeterministic.
asNonNullable() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
Updates UserDefinedFunction to non-nullable.
asNullable() - Method in class org.apache.spark.sql.types.ObjectType
 
asRDDId() - Method in class org.apache.spark.storage.BlockId
 
assert_true(Column) - Static method in class org.apache.spark.sql.functions
Returns null if the condition is true, and throws an exception otherwise.
assert_true(Column, Column) - Static method in class org.apache.spark.sql.functions
Returns null if the condition is true; throws an exception with the error message otherwise.
assertExceptionMsg(Throwable, String, boolean, ClassTag<E>) - Static method in class org.apache.spark.TestUtils
Asserts that the exception message contains the given message.
assertNotSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
Run some code involving jobs submitted to the given context and assert that the jobs did not spill.
assertSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
Run some code involving jobs submitted to the given context and assert that the jobs spilled.
assignClusters(Dataset<?>) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
Runs the PIC algorithm and returns a cluster assignment for each input vertex.
assignedAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
Sequence of currently assigned resource addresses.
Assignment(long, int) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
 
Assignment$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
 
assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
 
AssociationRules - Class in org.apache.spark.ml.fpm
 
AssociationRules() - Constructor for class org.apache.spark.ml.fpm.AssociationRules
 
associationRules() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
Get association rules fitted using the minConfidence.
AssociationRules - Class in org.apache.spark.mllib.fpm
Generates association rules from an RDD[FreqItemset[Item]].
AssociationRules() - Constructor for class org.apache.spark.mllib.fpm.AssociationRules
Constructs a default instance with default parameters {minConfidence = 0.8}.
AssociationRules.Rule<Item> - Class in org.apache.spark.mllib.fpm
An association rule between sets of items.
asTableCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
 
asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
 
asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
 
AsTableIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
AsTableIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier
 
AsTableIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
 
asTerms() - Static method in class org.apache.spark.ml.feature.Dot
 
asTerms() - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
asTerms() - Method in interface org.apache.spark.ml.feature.Term
Default representation of a single Term as a part of summed terms.
asTransform() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper
 
asTransforms() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper
 
AsyncEventQueue - Class in org.apache.spark.scheduler
An asynchronous queue for events.
AsyncEventQueue(String, SparkConf, LiveListenerBusMetrics, LiveListenerBus) - Constructor for class org.apache.spark.scheduler.AsyncEventQueue
 
AsyncRDDActions<T> - Class in org.apache.spark.rdd
A set of asynchronous RDD actions available through an implicit conversion.
AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions
 
atan(Column) - Static method in class org.apache.spark.sql.functions
 
atan(String) - Static method in class org.apache.spark.sql.functions
 
atan2(Column, Column) - Static method in class org.apache.spark.sql.functions
 
atan2(Column, String) - Static method in class org.apache.spark.sql.functions
 
atan2(String, Column) - Static method in class org.apache.spark.sql.functions
 
atan2(String, String) - Static method in class org.apache.spark.sql.functions
 
atan2(Column, double) - Static method in class org.apache.spark.sql.functions
 
atan2(String, double) - Static method in class org.apache.spark.sql.functions
 
atan2(double, Column) - Static method in class org.apache.spark.sql.functions
 
atan2(double, String) - Static method in class org.apache.spark.sql.functions
 
atanh(Column) - Static method in class org.apache.spark.sql.functions
 
atanh(String) - Static method in class org.apache.spark.sql.functions
 
attempt() - Method in class org.apache.spark.status.api.v1.TaskData
 
ATTEMPT() - Static method in class org.apache.spark.status.TaskIndexNames
 
attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
attemptId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
 
attemptId() - Method in class org.apache.spark.status.api.v1.StageData
 
attemptNumber() - Method in class org.apache.spark.BarrierTaskContext
 
attemptNumber() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
 
attemptNumber() - Method in class org.apache.spark.scheduler.StageInfo
 
attemptNumber() - Method in class org.apache.spark.scheduler.TaskInfo
 
attemptNumber() - Method in class org.apache.spark.TaskCommitDenied
 
attemptNumber() - Method in class org.apache.spark.TaskContext
How many times this task has been attempted.
attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
 
AtTimestamp(Date) - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp
 
attr() - Method in class org.apache.spark.graphx.Edge
 
attr() - Method in class org.apache.spark.graphx.EdgeContext
The attribute associated with the edge.
attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
 
Attribute - Class in org.apache.spark.ml.attribute
Abstract class for ML attributes.
Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute
 
attribute() - Method in class org.apache.spark.sql.sources.EqualNullSafe
 
attribute() - Method in class org.apache.spark.sql.sources.EqualTo
 
attribute() - Method in class org.apache.spark.sql.sources.GreaterThan
 
attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
 
attribute() - Method in class org.apache.spark.sql.sources.In
 
attribute() - Method in class org.apache.spark.sql.sources.IsNotNull
 
attribute() - Method in class org.apache.spark.sql.sources.IsNull
 
attribute() - Method in class org.apache.spark.sql.sources.LessThan
 
attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
 
attribute() - Method in class org.apache.spark.sql.sources.StringContains
 
attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith
 
attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith
 
AttributeFactory - Interface in org.apache.spark.ml.attribute
Trait for ML attribute factories.
AttributeGroup - Class in org.apache.spark.ml.attribute
Attributes that describe a vector ML column.
AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group without attribute info.
AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group knowing only the number of attributes.
AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
Creates an attribute group with attributes.
AttributeKeys - Class in org.apache.spark.ml.attribute
Keys used to store attributes.
AttributeKeys() - Constructor for class org.apache.spark.ml.attribute.AttributeKeys
 
attributeNameSyntaxError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
attributeNotFoundError(String, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
Optional array of attributes.
ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
 
attributes() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
 
attributes() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
attributes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
attributesForTypeUnsupportedError(ScalaReflection.Schema) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
AttributeType - Class in org.apache.spark.ml.attribute
An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType
 
attrType() - Method in class org.apache.spark.ml.attribute.Attribute
Attribute type.
attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
 
attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute
 
attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute
 
attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
 
available() - Method in class org.apache.spark.io.NioBufferedFileInputStream
 
available() - Method in class org.apache.spark.io.ReadAheadInputStream
 
available() - Method in class org.apache.spark.storage.BufferReleasingInputStream
 
availableAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
Sequence of currently available resource addresses.
AvailableNow() - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that processes all available data at the start of the query in one or multiple batches, then terminates the query.
Average() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
 
Avg - Class in org.apache.spark.sql.connector.expressions.aggregate
An aggregate function that returns the mean of all the values in a group.
Avg(Expression, boolean) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Avg
 
avg(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
Deprecated.
Average aggregate function.
avg(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
Deprecated.
Average aggregate function.
avg(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
avg(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns the average of the values in a group.
avg(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the mean value for each numeric column for each group.
avg(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
Compute the mean value for each numeric column for each group.
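A minimal usage sketch (assuming a DataFrame df with columns "dept" and "salary"):
    import org.apache.spark.sql.functions.avg

    df.groupBy("dept").avg("salary").show()   // per-group mean
    df.select(avg("salary")).show()           // overall mean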
avg() - Method in class org.apache.spark.util.DoubleAccumulator
Returns the average of elements added to the accumulator.
avg() - Method in class org.apache.spark.util.LongAccumulator
Returns the average of elements added to the accumulator.
avgEventRate() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
 
avgInputRate() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
avgMetrics() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
avgProcessingTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
avgSchedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
avgTotalDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
AvroMatchedField$() - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroMatchedField$
 
AvroSchemaHelper(Schema, StructType, Seq<String>, Seq<String>, boolean) - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
 
AvroUtils - Class in org.apache.spark.sql.avro
 
AvroUtils() - Constructor for class org.apache.spark.sql.avro.AvroUtils
 
AvroUtils.AvroMatchedField$ - Class in org.apache.spark.sql.avro
 
AvroUtils.AvroSchemaHelper - Class in org.apache.spark.sql.avro
Helper class to perform field lookup/matching on Avro schemas.
AvroUtils.RowReader - Interface in org.apache.spark.sql.avro
 
awaitAnyTermination() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
awaitAnyTermination(long) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
awaitReady(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
Preferred alternative to Await.ready().
awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
Preferred alternative to Await.result().
awaitResult(Future<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
 
awaitTermination() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Waits for the termination of this query, either by query.stop() or by an exception.
awaitTermination(long) - Method in interface org.apache.spark.sql.streaming.StreamingQuery
Waits for the termination of this query, either by query.stop() or by an exception.
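A minimal usage sketch (assuming a streaming Dataset streamingDf; the name is illustrative):
    val query = streamingDf.writeStream
      .format("console")
      .outputMode("append")
      .start()
    query.awaitTermination()   // blocks until stop() is called or the query fails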
awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Wait for the execution to stop.
awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
Wait for the execution to stop.
awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Wait for the execution to stop.
awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
Wait for the execution to stop.
axpy(double, Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
y += a * x
axpy(double, Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
y += a * x
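A minimal usage sketch on ml vectors (the values are illustrative):
    import org.apache.spark.ml.linalg.{BLAS, Vectors}

    val x = Vectors.dense(1.0, 2.0)
    val y = Vectors.dense(10.0, 20.0).toDense
    BLAS.axpy(0.5, x, y)   // y is updated in place: [10.5, 21.0]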

B

BACKUP_STANDALONE_MASTER_PREFIX() - Static method in class org.apache.spark.util.Utils
An identifier that backup masters use in their responses.
balanceSlack() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
 
barrier() - Method in class org.apache.spark.BarrierTaskContext
:: Experimental :: Sets a global barrier and waits until all tasks in this stage hit this barrier.
barrier() - Method in class org.apache.spark.rdd.RDD
:: Experimental :: Marks the current stage as a barrier stage, where Spark must launch all tasks together.
BARRIER() - Static method in class org.apache.spark.RequestMethod
 
BarrierCoordinatorMessage - Interface in org.apache.spark
 
barrierStageWithDynamicAllocationError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
barrierStageWithRDDChainPatternError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
BarrierTaskContext - Class in org.apache.spark
:: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage.
BarrierTaskInfo - Class in org.apache.spark
:: Experimental :: Carries all task infos of a barrier task.
base64(Column) - Static method in class org.apache.spark.sql.functions
Computes the BASE64 encoding of a binary column and returns it as a string column.
BaseAppResource - Interface in org.apache.spark.status.api.v1
Base class for resource handlers that use app-specific data.
baseOn(ParamPair<?>...) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
baseOn(ParamMap) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
baseOn(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Sets the given parameters in this grid to fixed values.
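A minimal usage sketch (assuming a LogisticRegression estimator; names are illustrative):
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.tuning.ParamGridBuilder

    val lr = new LogisticRegression()
    val grid = new ParamGridBuilder()
      .baseOn(lr.maxIter -> 10)                 // fixed in every generated ParamMap
      .addGrid(lr.regParam, Array(0.01, 0.1))   // varied across the grid
      .build()                                  // yields two ParamMaps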
BaseReadWrite - Interface in org.apache.spark.ml.util
Trait for MLWriter and MLReader.
BaseRelation - Class in org.apache.spark.sql.sources
Represents a collection of tuples with a known schema.
BaseRelation() - Constructor for class org.apache.spark.sql.sources.BaseRelation
 
baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SparkSession
Convert a BaseRelation created for external data sources into a DataFrame.
baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SQLContext
 
BaseRRDD<T,U> - Class in org.apache.spark.api.r
 
BaseRRDD(RDD<T>, int, byte[], String, String, byte[], Broadcast<Object>[], ClassTag<T>, ClassTag<U>) - Constructor for class org.apache.spark.api.r.BaseRRDD
 
BaseStreamingAppResource - Interface in org.apache.spark.status.api.v1.streaming
Base class for streaming API handlers, providing easy access to the streaming listener that holds the app's information.
BasicBlockReplicationPolicy - Class in org.apache.spark.storage
 
BasicBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.BasicBlockReplicationPolicy
 
basicCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
Use a basic AWS keypair for long-lived authorization.
basicSparkPage(HttpServletRequest, Function0<Seq<Node>>, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
Returns a page with the Spark CSS/JS and a simple format.
Batch - Interface in org.apache.spark.sql.connector.read
A physical representation of a data source scan for batch queries.
batchDuration() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
 
batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
 
batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
 
BATCHES() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
 
batchId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
 
batchId() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
 
BatchInfo - Class in org.apache.spark.status.api.v1.streaming
 
BatchInfo - Class in org.apache.spark.streaming.scheduler
:: DeveloperApi :: Class having information on completed batches.
BatchInfo(Time, Map<Object, StreamInputInfo>, long, Option<Object>, Option<Object>, Map<Object, OutputOperationInfo>) - Constructor for class org.apache.spark.streaming.scheduler.BatchInfo
 
batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
 
batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
 
batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
 
batchInfos() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
 
batchMetadataFileNotFoundError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
BatchStatus - Enum in org.apache.spark.status.api.v1.streaming
 
batchTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
 
batchTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
 
batchTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
 
BatchWrite - Interface in org.apache.spark.sql.connector.write
An interface that defines how to write data to a data source for batch processing.
batchWriteCapabilityError(Table, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
bbos() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
 
bean(Class<T>) - Static method in class org.apache.spark.sql.Encoders
Creates an encoder for Java Bean of type T.
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
beforeFetch(Connection, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Override connection specific properties to run before a select is made.
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
BernoulliCellSampler<T> - Class in org.apache.spark.util.random
:: DeveloperApi :: A sampler based on Bernoulli trials for partitioning a data sequence.
BernoulliCellSampler(double, double, boolean) - Constructor for class org.apache.spark.util.random.BernoulliCellSampler
 
BernoulliSampler<T> - Class in org.apache.spark.util.random
:: DeveloperApi :: A sampler based on Bernoulli trials.
BernoulliSampler(double, ClassTag<T>) - Constructor for class org.apache.spark.util.random.BernoulliSampler
 
bestModel() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
 
bestModel() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
 
beta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
The beta value, which controls precision vs recall weighting, used in "weightedFMeasure", "fMeasureByLabel".
beta() - Method in class org.apache.spark.mllib.random.WeibullGenerator
 
between(Object, Object) - Method in class org.apache.spark.sql.Column
True if the current column is between the lower bound and upper bound, inclusive.
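For illustration, a minimal Scala sketch of between, assuming a DataFrame named df with a numeric age column (both hypothetical):
    import org.apache.spark.sql.functions.col
    // keep rows whose age lies in [18, 65]; both bounds are inclusive
    val adults = df.filter(col("age").between(18, 65))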
bin(Column) - Static method in class org.apache.spark.sql.functions
An expression that returns the string representation of the binary value of the given long column.
bin(String) - Static method in class org.apache.spark.sql.functions
An expression that returns the string representation of the binary value of the given long column.
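A short sketch of bin, assuming a SparkSession named spark:
    import org.apache.spark.sql.functions.{bin, col}
    // ids 1, 2, 3 become the strings "1", "10", "11"
    spark.range(1, 4).select(bin(col("id"))).show()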
Binarizer - Class in org.apache.spark.ml.feature
Binarize a column of continuous features given a threshold.
Binarizer(String) - Constructor for class org.apache.spark.ml.feature.Binarizer
 
Binarizer() - Constructor for class org.apache.spark.ml.feature.Binarizer
 
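A minimal Binarizer sketch; the column names and threshold are illustrative, and spark is an assumed SparkSession:
    import org.apache.spark.ml.feature.Binarizer
    val data = spark.createDataFrame(Seq((0, 0.1), (1, 0.8))).toDF("id", "feature")
    val binarizer = new Binarizer()
      .setInputCol("feature")
      .setOutputCol("binarized")
      .setThreshold(0.5)              // values > 0.5 map to 1.0, the rest to 0.0
    binarizer.transform(data).show()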
Binary() - Static method in class org.apache.spark.ml.attribute.AttributeType
Binary type.
binary() - Method in class org.apache.spark.ml.feature.CountVectorizer
 
binary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
binary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
Binary toggle to control the output vector values.
binary() - Method in class org.apache.spark.ml.feature.HashingTF
Binary toggle to control term frequency counts.
binary() - Method in class org.apache.spark.sql.ColumnName
Creates a new StructField of type binary.
BINARY() - Static method in class org.apache.spark.sql.Encoders
An encoder for arrays of bytes.
binaryArithmeticCauseOverflowError(short, String, short) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
BinaryAttribute - Class in org.apache.spark.ml.attribute
A binary attribute.
BinaryClassificationEvaluator - Class in org.apache.spark.ml.evaluation
Evaluator for binary classification, which expects input columns rawPrediction, label and an optional weight column.
BinaryClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
BinaryClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
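A hedged sketch of the evaluator, assuming predictions is a DataFrame produced by a fitted binary classifier with rawPrediction and label columns:
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    val evaluator = new BinaryClassificationEvaluator()
      .setLabelCol("label")
      .setRawPredictionCol("rawPrediction")
      .setMetricName("areaUnderROC")  // or "areaUnderPR"
    val auc = evaluator.evaluate(predictions)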
BinaryClassificationMetricComputer - Interface in org.apache.spark.mllib.evaluation.binary
Trait for a binary classification evaluation metric computer.
BinaryClassificationMetrics - Class in org.apache.spark.mllib.evaluation
Evaluator for binary classification.
BinaryClassificationMetrics(RDD<? extends Product>, int) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
 
BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Defaults numBins to 0.
BinaryClassificationSummary - Interface in org.apache.spark.ml.classification
Abstraction for binary classification results for a given model.
binaryColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
 
BinaryConfusionMatrix - Interface in org.apache.spark.mllib.evaluation.binary
Trait for a binary confusion matrix.
binaryFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
binaryFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
binaryFiles(String, int) - Method in class org.apache.spark.SparkContext
Get an RDD for a Hadoop-readable dataset as PortableDataStream for each file (useful for binary data).
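A minimal sketch of binaryFiles, assuming a SparkContext named sc; the path is hypothetical:
    // each element pairs a file path with a lazily read PortableDataStream
    val files = sc.binaryFiles("hdfs:///data/blobs")
    val sizes = files.mapValues(_.toArray().length)  // bytes per file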
binaryLabelValidator() - Static method in class org.apache.spark.mllib.util.DataValidators
Function to check if labels used for classification are either zero or one.
BinaryLogisticRegressionSummary - Interface in org.apache.spark.ml.classification
Abstraction for binary logistic regression results for a given model.
BinaryLogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
Binary logistic regression results for a given model.
BinaryLogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
 
BinaryLogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
Abstraction for binary logistic regression training results.
BinaryLogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
Binary logistic regression training results.
BinaryLogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl
 
BinaryRandomForestClassificationSummary - Interface in org.apache.spark.ml.classification
Abstraction for BinaryRandomForestClassification results for a given model.
BinaryRandomForestClassificationSummaryImpl - Class in org.apache.spark.ml.classification
Binary RandomForestClassification results for a given model.
BinaryRandomForestClassificationSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
 
BinaryRandomForestClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
Abstraction for BinaryRandomForestClassification training results.
BinaryRandomForestClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
Binary RandomForestClassification training results.
BinaryRandomForestClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationTrainingSummaryImpl
 
binaryRecords(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
Load data from a flat binary file, assuming the length of each record is constant.
binaryRecords(String, int, Configuration) - Method in class org.apache.spark.SparkContext
Load data from a flat binary file, assuming the length of each record is constant.
binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files with fixed record lengths, yielding byte arrays.
binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.StreamingContext
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files, assuming a fixed length per record, generating one byte array per record.
BinarySample - Class in org.apache.spark.mllib.stat.test
Class that represents the group and value of a sample.
BinarySample(boolean, double) - Constructor for class org.apache.spark.mllib.stat.test.BinarySample
 
binarySummary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
Gets summary of model on training set.
binarySummary() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
Gets summary of model on training set.
BinaryType - Class in org.apache.spark.sql.types
The data type representing Array[Byte] values.
BinaryType() - Constructor for class org.apache.spark.sql.types.BinaryType
 
BinaryType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the BinaryType object.
bind(StructType) - Method in interface org.apache.spark.sql.connector.catalog.functions.UnboundFunction
Bind this function to an input type.
Binomial$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
 
BinomialBounds - Class in org.apache.spark.util.random
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact sample size with high confidence when sampling without replacement.
BinomialBounds() - Constructor for class org.apache.spark.util.random.BinomialBounds
 
BisectingKMeans - Class in org.apache.spark.ml.clustering
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques" by Steinbach, Karypis, and Kumar, with modification to fit Spark.
BisectingKMeans(String) - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
 
BisectingKMeans() - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
 
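A sketch of the spark.ml API above, assuming dataset is a DataFrame with a Vector column named features:
    import org.apache.spark.ml.clustering.BisectingKMeans
    val bkm = new BisectingKMeans().setK(4).setSeed(1L)
    val model = bkm.fit(dataset)
    model.clusterCenters.foreach(println)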
BisectingKMeans - Class in org.apache.spark.mllib.clustering
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques" by Steinbach, Karypis, and Kumar, with modification to fit Spark.
BisectingKMeans() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeans
Constructs with the default configuration.
BisectingKMeansModel - Class in org.apache.spark.ml.clustering
Model fitted by BisectingKMeans.
BisectingKMeansModel - Class in org.apache.spark.mllib.clustering
Clustering model produced by BisectingKMeans.
BisectingKMeansModel(ClusteringTreeNode) - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel
 
BisectingKMeansModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
 
BisectingKMeansModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.clustering
 
BisectingKMeansModel.SaveLoadV3_0$ - Class in org.apache.spark.mllib.clustering
 
BisectingKMeansParams - Interface in org.apache.spark.ml.clustering
Common params for BisectingKMeans and BisectingKMeansModel
BisectingKMeansSummary - Class in org.apache.spark.ml.clustering
Summary of BisectingKMeans.
bit_length(Column) - Static method in class org.apache.spark.sql.functions
Calculates the bit length for the specified string column.
bitSize() - Method in class org.apache.spark.util.sketch.BloomFilter
Returns the number of bits in the underlying bit array.
bitwise_not(Column) - Static method in class org.apache.spark.sql.functions
Computes bitwise NOT (~) of a number.
bitwiseAND(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise AND of this expression with another expression.
bitwiseNOT(Column) - Static method in class org.apache.spark.sql.functions
Deprecated.
Use bitwise_not. Since 3.2.0.
bitwiseOR(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise OR of this expression with another expression.
bitwiseXOR(Object) - Method in class org.apache.spark.sql.Column
Compute bitwise XOR of this expression with another expression.
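The Column bitwise operators and bitwise_not together, sketched against an assumed DataFrame df with integral columns a and b:
    import org.apache.spark.sql.functions.{bitwise_not, col}
    df.select(
      col("a").bitwiseAND(col("b")),
      col("a").bitwiseOR(col("b")),
      col("a").bitwiseXOR(col("b")),
      bitwise_not(col("a")))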
blacklistedInStages() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
Deprecated.
use excludedInStages instead. Since 3.1.0.
BLAS - Class in org.apache.spark.ml.linalg
BLAS routines for MLlib's vectors and matrices.
BLAS() - Constructor for class org.apache.spark.ml.linalg.BLAS
 
BLAS - Class in org.apache.spark.mllib.linalg
BLAS routines for MLlib's vectors and matrices.
BLAS() - Constructor for class org.apache.spark.mllib.linalg.BLAS
 
BlockData - Interface in org.apache.spark.storage
Abstracts away how blocks are stored and provides different ways to read the underlying block data.
blockDoesNotExistError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
blockedByLock() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
 
blockedByThreadId() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
 
BlockEvictionHandler - Interface in org.apache.spark.storage.memory
 
BlockGeneratorListener - Interface in org.apache.spark.streaming.receiver
Listener object for BlockGenerator events.
blockHaveBeenRemovedError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
BlockId - Class in org.apache.spark.storage
:: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file.
BlockId() - Constructor for class org.apache.spark.storage.BlockId
 
blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
 
blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocations
 
blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
 
blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock
 
blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
 
blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
 
blockId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
 
blockId() - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
 
blockIds() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
 
BlockInfoWrapper - Class in org.apache.spark.storage
 
BlockInfoWrapper(BlockInfo, Lock, Condition) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
 
BlockInfoWrapper(BlockInfo, Lock) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
 
BlockLocationsAndStatus(Seq<BlockManagerId>, BlockStatus, Option<String[]>) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
 
BlockLocationsAndStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
 
blockManager() - Method in class org.apache.spark.SparkEnv
 
blockManagerAddedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
blockManagerAddedToJson(SparkListenerBlockManagerAdded) - Static method in class org.apache.spark.util.JsonProtocol
 
BlockManagerHeartbeat(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
 
BlockManagerHeartbeat$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
 
blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
 
blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
 
BlockManagerId - Class in org.apache.spark.storage
:: DeveloperApi :: This class represents a unique identifier for a BlockManager.
BlockManagerId() - Constructor for class org.apache.spark.storage.BlockManagerId
 
blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
 
blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetPeers
 
blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks
 
blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
 
blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
 
blockManagerId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
 
blockManagerIdCache() - Static method in class org.apache.spark.storage.BlockManagerId
The max cache size is hardcoded to 10000; since the size of a BlockManagerId object is about 48B, the total memory cost should be below 1MB, which is feasible.
blockManagerIdFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
blockManagerIdToJson(BlockManagerId) - Static method in class org.apache.spark.util.JsonProtocol
 
BlockManagerMessages - Class in org.apache.spark.storage
 
BlockManagerMessages() - Constructor for class org.apache.spark.storage.BlockManagerMessages
 
BlockManagerMessages.BlockLocationsAndStatus - Class in org.apache.spark.storage
The response message of GetLocationsAndStatus request.
BlockManagerMessages.BlockLocationsAndStatus$ - Class in org.apache.spark.storage
 
BlockManagerMessages.BlockManagerHeartbeat - Class in org.apache.spark.storage
 
BlockManagerMessages.BlockManagerHeartbeat$ - Class in org.apache.spark.storage
 
BlockManagerMessages.DecommissionBlockManager$ - Class in org.apache.spark.storage
 
BlockManagerMessages.DecommissionBlockManagers - Class in org.apache.spark.storage
 
BlockManagerMessages.DecommissionBlockManagers$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetBlockStatus - Class in org.apache.spark.storage
 
BlockManagerMessages.GetBlockStatus$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetExecutorEndpointRef - Class in org.apache.spark.storage
 
BlockManagerMessages.GetExecutorEndpointRef$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetLocations - Class in org.apache.spark.storage
 
BlockManagerMessages.GetLocations$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetLocationsAndStatus - Class in org.apache.spark.storage
 
BlockManagerMessages.GetLocationsAndStatus$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetLocationsMultipleBlockIds - Class in org.apache.spark.storage
 
BlockManagerMessages.GetLocationsMultipleBlockIds$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetMatchingBlockIds - Class in org.apache.spark.storage
 
BlockManagerMessages.GetMatchingBlockIds$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetMemoryStatus$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetPeers - Class in org.apache.spark.storage
 
BlockManagerMessages.GetPeers$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetReplicateInfoForRDDBlocks - Class in org.apache.spark.storage
 
BlockManagerMessages.GetReplicateInfoForRDDBlocks$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetShufflePushMergerLocations - Class in org.apache.spark.storage
 
BlockManagerMessages.GetShufflePushMergerLocations$ - Class in org.apache.spark.storage
 
BlockManagerMessages.GetStorageStatus$ - Class in org.apache.spark.storage
 
BlockManagerMessages.IsExecutorAlive - Class in org.apache.spark.storage
 
BlockManagerMessages.IsExecutorAlive$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RegisterBlockManager - Class in org.apache.spark.storage
 
BlockManagerMessages.RegisterBlockManager$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveBlock - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveBlock$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveBroadcast - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveBroadcast$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveExecutor - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveExecutor$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveRdd - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveRdd$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveShuffle - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveShuffle$ - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveShufflePushMergerLocation - Class in org.apache.spark.storage
 
BlockManagerMessages.RemoveShufflePushMergerLocation$ - Class in org.apache.spark.storage
 
BlockManagerMessages.ReplicateBlock - Class in org.apache.spark.storage
 
BlockManagerMessages.ReplicateBlock$ - Class in org.apache.spark.storage
 
BlockManagerMessages.StopBlockManagerMaster$ - Class in org.apache.spark.storage
 
BlockManagerMessages.ToBlockManagerMaster - Interface in org.apache.spark.storage
 
BlockManagerMessages.ToBlockManagerMasterStorageEndpoint - Interface in org.apache.spark.storage
 
BlockManagerMessages.TriggerThreadDump$ - Class in org.apache.spark.storage
Driver to Executor message to trigger a thread dump.
BlockManagerMessages.UpdateBlockInfo - Class in org.apache.spark.storage
 
BlockManagerMessages.UpdateBlockInfo$ - Class in org.apache.spark.storage
 
blockManagerRemovedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
blockManagerRemovedToJson(SparkListenerBlockManagerRemoved) - Static method in class org.apache.spark.util.JsonProtocol
 
BlockMatrix - Class in org.apache.spark.mllib.linalg.distributed
Represents a distributed matrix in blocks of local matrices.
BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Alternate constructor for BlockMatrix that does not require the number of rows and columns as input.
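A small sketch that builds a BlockMatrix from coordinate entries, assuming a SparkContext named sc; the entries are illustrative:
    import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}
    val entries = sc.parallelize(Seq(MatrixEntry(0, 0, 1.0), MatrixEntry(1, 1, 2.0)))
    // split into 2x2 blocks of local matrices; validate() sanity-checks the blocks
    val mat = new CoordinateMatrix(entries).toBlockMatrix(2, 2).cache()
    mat.validate()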
blockName() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
 
blockName() - Method in class org.apache.spark.status.LiveRDDPartition
 
blockNotFoundError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
BlockNotFoundException - Exception in org.apache.spark.storage
 
BlockNotFoundException(String) - Constructor for exception org.apache.spark.storage.BlockNotFoundException
 
BlockReplicationPolicy - Interface in org.apache.spark.storage
:: DeveloperApi :: BlockReplicationPolicy provides logic for prioritizing a sequence of peers for replicating blocks.
BlockReplicationUtils - Class in org.apache.spark.storage
 
BlockReplicationUtils() - Constructor for class org.apache.spark.storage.BlockReplicationUtils
 
blocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
blockSize() - Method in interface org.apache.spark.ml.param.shared.HasBlockSize
Param for block size for stacking input data in matrices.
blockSize() - Method in class org.apache.spark.ml.recommendation.ALS
 
blockSize() - Method in class org.apache.spark.ml.recommendation.ALSModel
 
BlockStatus - Class in org.apache.spark.storage
 
BlockStatus(StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockStatus
 
blockStatusFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
blockStatusQueryReturnedNullError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
blockStatusToJson(BlockStatus) - Static method in class org.apache.spark.util.JsonProtocol
 
blockUpdatedInfo() - Method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
 
BlockUpdatedInfo - Class in org.apache.spark.storage
:: DeveloperApi :: Stores information about a block status in a block manager.
BlockUpdatedInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockUpdatedInfo
 
blockUpdatedInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
blockUpdatedInfoToJson(BlockUpdatedInfo) - Static method in class org.apache.spark.util.JsonProtocol
 
blockUpdateFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
 
blockUpdateToJson(SparkListenerBlockUpdated) - Static method in class org.apache.spark.util.JsonProtocol
 
bloomFilter(String, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
bloomFilter(Column, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
bloomFilter(String, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
bloomFilter(Column, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
Builds a Bloom filter over a specified column.
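A sketch of the Bloom filter helpers, assuming df has a long id column; the sizing numbers are illustrative:
    // expect roughly 1M distinct items with a 1% false-positive rate
    val bf = df.stat.bloomFilter("id", 1000000L, 0.01)
    bf.mightContain(42L)  // false means the item is definitely absent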
BloomFilter - Class in org.apache.spark.util.sketch
A Bloom filter is a space-efficient probabilistic data structure that offers an approximate containment test with one-sided error: if it claims that an item is contained in it, this might be in error, but if it claims that an item is not contained in it, then this is definitely true.
BloomFilter() - Constructor for class org.apache.spark.util.sketch.BloomFilter
 
BloomFilter.Version - Enum in org.apache.spark.util.sketch
 
bmAddress() - Method in class org.apache.spark.FetchFailed
 
BOOLEAN() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable boolean type.
booleanColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
 
BooleanParam - Class in org.apache.spark.ml.param
Specialized version of Param[Boolean] for Java.
BooleanParam(String, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
 
BooleanParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
 
BooleanType - Class in org.apache.spark.sql.types
The data type representing Boolean values.
BooleanType() - Constructor for class org.apache.spark.sql.types.BooleanType
 
BooleanType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the BooleanType object.
boost(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, boolean, long, String, Option<org.apache.spark.ml.util.Instrumentation>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Internal method for performing regression using trees as base learners.
BoostingStrategy - Class in org.apache.spark.mllib.tree.configuration
Configuration options for GradientBoostedTrees.
BoostingStrategy(Strategy, Loss, int, double, double) - Constructor for class org.apache.spark.mllib.tree.configuration.BoostingStrategy
 
bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
bootstrap() - Method in interface org.apache.spark.ml.tree.RandomForestParams
Whether bootstrap samples are used when building trees.
Both() - Static method in class org.apache.spark.graphx.EdgeDirection
Edges originating from *and* arriving at a vertex of interest.
boundaries() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
Boundaries in increasing order for which predictions are known.
boundaries() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
 
BoundedDouble - Class in org.apache.spark.partial
A Double value with error bars and associated confidence.
BoundedDouble(double, double, double, double) - Constructor for class org.apache.spark.partial.BoundedDouble
 
BoundFunction - Interface in org.apache.spark.sql.connector.catalog.functions
Represents a function that is bound to an input type.
BreezeUtil - Class in org.apache.spark.ml.ann
In-place DGEMM and DGEMV for Breeze.
BreezeUtil() - Constructor for class org.apache.spark.ml.ann.BreezeUtil
 
broadcast(T) - Method in class org.apache.spark.api.java.JavaSparkContext
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
Broadcast<T> - Class in org.apache.spark.broadcast
A broadcast variable.
Broadcast(long, ClassTag<T>) - Constructor for class org.apache.spark.broadcast.Broadcast
 
broadcast(T, ClassTag<T>) - Method in class org.apache.spark.SparkContext
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
broadcast(Dataset<T>) - Static method in class org.apache.spark.sql.functions
Marks a DataFrame as small enough for use in broadcast joins.
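Two hedged sketches covering both broadcast flavors above; sc, rdd, large, and small are assumed names:
    // broadcast variable: a read-only lookup shipped once per executor
    val lookup = sc.broadcast(Map(1 -> "a", 2 -> "b"))
    rdd.map(id => lookup.value.getOrElse(id, "?"))

    // broadcast join hint: small is sent to every node, avoiding a shuffle of large
    import org.apache.spark.sql.functions.broadcast
    large.join(broadcast(small), "key")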
BROADCAST() - Static method in class org.apache.spark.storage.BlockId
 
BroadcastBlockId - Class in org.apache.spark.storage
 
BroadcastBlockId(long, String) - Constructor for class org.apache.spark.storage.BroadcastBlockId
 
broadcastCleaned(long) - Method in interface org.apache.spark.CleanerListener
 
BroadcastFactory - Interface in org.apache.spark.broadcast
An interface for all broadcast implementations in Spark, allowing multiple implementations to be plugged in.
broadcastId() - Method in class org.apache.spark.CleanBroadcast
 
broadcastId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
 
broadcastId() - Method in class org.apache.spark.storage.BroadcastBlockId
 
broadcastManager() - Method in class org.apache.spark.SparkEnv
 
bround(Column) - Static method in class org.apache.spark.sql.functions
Returns the value of the column e rounded to 0 decimal places with HALF_EVEN round mode.
bround(Column, int) - Static method in class org.apache.spark.sql.functions
Round the value of e to scale decimal places with HALF_EVEN round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
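bround applies banker's rounding (HALF_EVEN), so ties go to the nearest even digit; a quick sketch assuming a SparkSession named spark:
    import org.apache.spark.sql.functions.{bround, lit}
    // 2.5 rounds to 2.0 and 3.5 rounds to 4.0 under HALF_EVEN
    spark.range(1).select(bround(lit(2.5)), bround(lit(3.5))).show()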
bucket(int, String...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a bucket transform for one or more columns.
bucket(int, NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
 
bucket(int, NamedReference[], NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
 
bucket(Column, Column) - Static method in class org.apache.spark.sql.functions
A transform for any type that partitions by a hash of the input column.
bucket(int, Column) - Static method in class org.apache.spark.sql.functions
A transform for any type that partitions by a hash of the input column.
bucketBy(int, String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
Buckets the output by the given columns.
bucketBy(int, String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
Buckets the output by the given columns.
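Bucketing only applies when writing to a table; a sketch with hypothetical names:
    df.write
      .bucketBy(8, "user_id")
      .sortBy("user_id")
      .saveAsTable("events_bucketed")  // bucketBy requires saveAsTable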
bucketByAndSortByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
bucketByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
BucketedRandomProjectionLSH - Class in org.apache.spark.ml.feature
This BucketedRandomProjectionLSH implements Locality Sensitive Hashing functions for Euclidean distance metrics.
BucketedRandomProjectionLSH(String) - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
BucketedRandomProjectionLSH() - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
BucketedRandomProjectionLSHModel - Class in org.apache.spark.ml.feature
Model produced by BucketedRandomProjectionLSH, where multiple random vectors are stored.
BucketedRandomProjectionLSHParams - Interface in org.apache.spark.ml.feature
bucketingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
Bucketizer - Class in org.apache.spark.ml.feature
Bucketizer maps a column of continuous features to a column of feature buckets.
Bucketizer(String) - Constructor for class org.apache.spark.ml.feature.Bucketizer
 
Bucketizer() - Constructor for class org.apache.spark.ml.feature.Bucketizer
 
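A Bucketizer sketch; the splits array below defines three buckets and is illustrative, and df is an assumed DataFrame with a double column named value:
    import org.apache.spark.ml.feature.Bucketizer
    val splits = Array(Double.NegativeInfinity, 0.0, 10.0, Double.PositiveInfinity)
    val bucketizer = new Bucketizer()
      .setInputCol("value")
      .setOutputCol("bucket")  // each row gets 0.0, 1.0 or 2.0
      .setSplits(splits)
    val bucketed = bucketizer.transform(df)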
bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
bucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams
The length of each hash bucket; a larger bucket lowers the false negative rate.
bucketSortingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
BucketSpecHelper(BucketSpec) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper
 
buffer() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
 
bufferEncoder() - Method in class org.apache.spark.ml.feature.StringIndexerAggregator
 
bufferEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
Specifies the Encoder for the intermediate value type.
BufferReleasingInputStream - Class in org.apache.spark.storage
Helper class that ensures a ManagedBuffer is released upon InputStream.close(), and also detects stream corruption if streamCompressedOrEncrypted is true.
BufferReleasingInputStream(InputStream, ShuffleBlockFetcherIterator, BlockId, int, BlockManagerId, boolean, boolean, Option<CheckedInputStream>) - Constructor for class org.apache.spark.storage.BufferReleasingInputStream
 
bufferSchema() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
Deprecated.
A StructType represents data types of values in the aggregation buffer.
build(Node, int) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
Create DecisionTreeModelReadWrite.NodeData instances for this node and all children.
build(DecisionTreeModel, int) - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
Create EnsembleModelReadWrite.EnsembleNodeData instances for the given tree.
build() - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
Builds and returns all combinations of parameters specified by the param grid.
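A sketch of ParamGridBuilder; the grid below expands to four ParamMaps, the cross product of the two value lists:
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.tuning.ParamGridBuilder
    val lr = new LogisticRegression()
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .addGrid(lr.elasticNetParam, Array(0.0, 0.5))
      .build()  // Array[ParamMap] of size 4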
build() - Method in class org.apache.spark.resource.ResourceProfileBuilder
 
build() - Method in interface org.apache.spark.sql.connector.read.ScanBuilder
 
build(Expression) - Method in class org.apache.spark.sql.connector.util.V2ExpressionSQLBuilder
 
build() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperationBuilder
Returns a RowLevelOperation that controls how Spark rewrites data for DELETE, UPDATE, MERGE commands.
build() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Returns a logical Write shared between batch and streaming.
build() - Method in class org.apache.spark.sql.types.MetadataBuilder
Builds the Metadata instance.
build() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder
 
build() - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
Returns the appropriate instance of SparkAWSCredentials given the configured parameters.
builder() - Static method in class org.apache.spark.sql.SparkSession
Creates a SparkSession.Builder for constructing a SparkSession.
Builder() - Constructor for class org.apache.spark.sql.SparkSession.Builder
 
Builder() - Constructor for class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
 
buildErrorResponse(Response.Status, String) - Static method in class org.apache.spark.ui.UIUtils
 
buildFilter(Seq<Expression>, Seq<Attribute>) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
Builds a function that can be used to filter batches prior to being decompressed.
buildFilter(Seq<Expression>, Seq<Attribute>) - Method in class org.apache.spark.sql.columnar.SimpleMetricsCachedBatchSerializer
 
buildForBatch() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Deprecated.
buildForStreaming() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
Deprecated.
buildLocationMetadata(Seq<Path>, int) - Static method in class org.apache.spark.util.Utils
Convert a sequence of Paths to a metadata string.
buildPools() - Method in interface org.apache.spark.scheduler.SchedulableBuilder
 
buildReaderUnsupportedForFileFormatError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
buildScan(Seq<Attribute>, Seq<Expression>) - Method in interface org.apache.spark.sql.sources.CatalystScan
 
buildScan(String[], Filter[]) - Method in interface org.apache.spark.sql.sources.PrunedFilteredScan
 
buildScan(String[]) - Method in interface org.apache.spark.sql.sources.PrunedScan
 
buildScan() - Method in interface org.apache.spark.sql.sources.TableScan
 
buildTreeFromNodes(DecisionTreeModelReadWrite.NodeData[], String) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
Given all data for all nodes in a tree, rebuild the tree.
BYTE() - Static method in class org.apache.spark.api.r.SerializationFormats
 
BYTE() - Static method in class org.apache.spark.sql.Encoders
An encoder for nullable byte type.
BytecodeUtils - Class in org.apache.spark.graphx.util
Includes a utility function to test whether a function accesses a specific attribute of an object.
BytecodeUtils() - Constructor for class org.apache.spark.graphx.util.BytecodeUtils
 
ByteExactNumeric - Class in org.apache.spark.sql.types
 
ByteExactNumeric() - Constructor for class org.apache.spark.sql.types.ByteExactNumeric
 
BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.input$
 
BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$
 
BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
 
bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
 
bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
 
bytesToString(long) - Static method in class org.apache.spark.util.Utils
Convert a quantity in bytes to a human-readable string such as "4.0 MiB".
bytesToString(BigInt) - Static method in class org.apache.spark.util.Utils
 
byteStringAsBytes(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to bytes for internal use.
byteStringAsGb(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to gibibytes for internal use.
byteStringAsKb(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to kibibytes for internal use.
byteStringAsMb(String) - Static method in class org.apache.spark.util.Utils
Convert a passed byte string (e.g. 50b, 100k, or 250m) to mebibytes for internal use.
bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
 
bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
 
bytesWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
 
bytesWritten(long) - Method in interface org.apache.spark.util.logging.RollingPolicy
Notify that bytes have been written.
ByteType - Class in org.apache.spark.sql.types
The data type representing Byte values.
ByteType() - Constructor for class org.apache.spark.sql.types.ByteType
 
ByteType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the ByteType object.

C

cache() - Method in class org.apache.spark.api.java.JavaDoubleRDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.api.java.JavaPairRDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.api.java.JavaRDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.graphx.Graph
Caches the vertices and edges associated with this graph at the previously-specified target storage levels, which default to MEMORY_ONLY.
cache() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
cache() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
cache() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
Persists the vertex partitions at targetStorageLevel, which defaults to MEMORY_ONLY.
cache() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
Caches the underlying RDD.
cache() - Method in class org.apache.spark.rdd.RDD
Persist this RDD with the default storage level (MEMORY_ONLY).
cache() - Method in class org.apache.spark.sql.Dataset
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
cache() - Method in class org.apache.spark.streaming.api.java.JavaDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
cache() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
cache() - Method in class org.apache.spark.streaming.dstream.DStream
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
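Note the differing defaults above: RDDs cache at MEMORY_ONLY, Datasets at MEMORY_AND_DISK, and DStreams at MEMORY_ONLY_SER. A sketch with a hypothetical path:
    val df = spark.read.parquet("/data/events")
    df.cache()   // MEMORY_AND_DISK for Datasets
    df.count()   // the first action materializes the cache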
CACHED_PARTITIONS() - Static method in class org.apache.spark.ui.storage.ToolTips
 
CachedBatch - Interface in org.apache.spark.sql.columnar
Basic interface that all cached batches of data must support.
CachedBatchSerializer - Interface in org.apache.spark.sql.columnar
Provides APIs that handle transformations of SQL data associated with the cache/persist APIs.
cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
cacheNodeIds() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
If false, the algorithm will pass trees to executors to match instances with nodes.
cacheSize() - Method in interface org.apache.spark.SparkExecutorInfo
 
cacheSize() - Method in class org.apache.spark.SparkExecutorInfoImpl
 
cacheTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
Caches the specified table in-memory.
cacheTable(String, StorageLevel) - Method in class org.apache.spark.sql.catalog.Catalog
Caches the specified table with the given storage level.
cacheTable(String) - Method in class org.apache.spark.sql.SQLContext
Caches the specified table in-memory.
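A sketch of Catalog.cacheTable, assuming a table named events is registered:
    spark.catalog.cacheTable("events")
    // or with an explicit storage level:
    // import org.apache.spark.storage.StorageLevel
    // spark.catalog.cacheTable("events", StorageLevel.MEMORY_ONLY)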
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
information calculation for multiclass classification
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
variance calculation
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
information calculation for multiclass classification
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
variance calculation
calculate(double[], double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
information calculation for multiclass classification
calculate(double, double, double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
information calculation for regression
calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
information calculation for multiclass classification
calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
variance calculation
calculateAmountAndPartsForFraction(double) - Static method in class org.apache.spark.resource.ResourceUtils
 
calculateNumberOfPartitions(long, int, int) - Method in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
Calculate the number of partitions to use in saving the model.
CalendarInterval - Class in org.apache.spark.unsafe.types
The class representing calendar intervals.
CalendarInterval(int, int, long) - Constructor for class org.apache.spark.unsafe.types.CalendarInterval
 
CalendarIntervalType - Class in org.apache.spark.sql.types
The data type representing calendar intervals.
CalendarIntervalType() - Constructor for class org.apache.spark.sql.types.CalendarIntervalType
 
CalendarIntervalType - Static variable in class org.apache.spark.sql.types.DataTypes
Gets the CalendarIntervalType object.
call(K, Iterator<V1>, Iterator<V2>) - Method in interface org.apache.spark.api.java.function.CoGroupFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.DoubleFlatMapFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.DoubleFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.FilterFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.FlatMapFunction
 
call(T1, T2) - Method in interface org.apache.spark.api.java.function.FlatMapFunction2
 
call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsFunction
 
call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsWithStateFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.ForeachFunction
 
call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.ForeachPartitionFunction
 
call(T1) - Method in interface org.apache.spark.api.java.function.Function
 
call() - Method in interface org.apache.spark.api.java.function.Function0
 
call(T1, T2) - Method in interface org.apache.spark.api.java.function.Function2
 
call(T1, T2, T3) - Method in interface org.apache.spark.api.java.function.Function3
 
call(T1, T2, T3, T4) - Method in interface org.apache.spark.api.java.function.Function4
 
call(T) - Method in interface org.apache.spark.api.java.function.MapFunction
 
call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.MapGroupsFunction
 
call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.MapGroupsWithStateFunction
 
call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.MapPartitionsFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.PairFlatMapFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.PairFunction
 
call(T, T) - Method in interface org.apache.spark.api.java.function.ReduceFunction
 
call(T) - Method in interface org.apache.spark.api.java.function.VoidFunction
 
call(T1, T2) - Method in interface org.apache.spark.api.java.function.VoidFunction2
 
call() - Method in interface org.apache.spark.sql.api.java.UDF0
 
call(T1) - Method in interface org.apache.spark.sql.api.java.UDF1
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) - Method in interface org.apache.spark.sql.api.java.UDF10
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11) - Method in interface org.apache.spark.sql.api.java.UDF11
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12) - Method in interface org.apache.spark.sql.api.java.UDF12
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13) - Method in interface org.apache.spark.sql.api.java.UDF13
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14) - Method in interface org.apache.spark.sql.api.java.UDF14
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Method in interface org.apache.spark.sql.api.java.UDF15
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16) - Method in interface org.apache.spark.sql.api.java.UDF16
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17) - Method in interface org.apache.spark.sql.api.java.UDF17
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18) - Method in interface org.apache.spark.sql.api.java.UDF18
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19) - Method in interface org.apache.spark.sql.api.java.UDF19
 
call(T1, T2) - Method in interface org.apache.spark.sql.api.java.UDF2
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20) - Method in interface org.apache.spark.sql.api.java.UDF20
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21) - Method in interface org.apache.spark.sql.api.java.UDF21
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22) - Method in interface org.apache.spark.sql.api.java.UDF22
 
call(T1, T2, T3) - Method in interface org.apache.spark.sql.api.java.UDF3
 
call(T1, T2, T3, T4) - Method in interface org.apache.spark.sql.api.java.UDF4
 
call(T1, T2, T3, T4, T5) - Method in interface org.apache.spark.sql.api.java.UDF5
 
call(T1, T2, T3, T4, T5, T6) - Method in interface org.apache.spark.sql.api.java.UDF6
 
call(T1, T2, T3, T4, T5, T6, T7) - Method in interface org.apache.spark.sql.api.java.UDF7
 
call(T1, T2, T3, T4, T5, T6, T7, T8) - Method in interface org.apache.spark.sql.api.java.UDF8
 
call(T1, T2, T3, T4, T5, T6, T7, T8, T9) - Method in interface org.apache.spark.sql.api.java.UDF9
 
call_udf(String, Column...) - Static method in class org.apache.spark.sql.functions
Call a user-defined function.
call_udf(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Call a user-defined function.
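A sketch tying the Java UDF interfaces above to call_udf, assuming df has an integer column n; the UDF name is illustrative:
    import org.apache.spark.sql.api.java.UDF1
    import org.apache.spark.sql.functions.{call_udf, col}
    import org.apache.spark.sql.types.DataTypes
    spark.udf.register("plusOne",
      new UDF1[Integer, Integer] { override def call(x: Integer): Integer = x + 1 },
      DataTypes.IntegerType)
    df.select(call_udf("plusOne", col("n")))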
callSite() - Method in class org.apache.spark.storage.RDDInfo
 
callUDF(String, Column...) - Static method in class org.apache.spark.sql.functions
Call a user-defined function.
callUDF(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Deprecated.
Use call_udf. Since 3.2.0.
cancel() - Method in class org.apache.spark.ComplexFutureAction
 
cancel() - Method in interface org.apache.spark.FutureAction
Cancels the execution of this action.
cancel() - Method in class org.apache.spark.SimpleFutureAction
 
cancelAllJobs() - Method in class org.apache.spark.api.java.JavaSparkContext
Cancel all jobs that have been scheduled or are running.
cancelAllJobs() - Method in class org.apache.spark.SparkContext
Cancel all jobs that have been scheduled or are running.
cancelJob(int, String) - Method in class org.apache.spark.SparkContext
Cancel a given job if it's scheduled or running.
cancelJob(int) - Method in class org.apache.spark.SparkContext
Cancel a given job if it's scheduled or running.
cancelJobGroup(String) - Method in class org.apache.spark.api.java.JavaSparkContext
Cancel active jobs for the specified group.
cancelJobGroup(String) - Method in class org.apache.spark.SparkContext
Cancel active jobs for the specified group.
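A sketch of job-group cancellation, assuming a SparkContext named sc; the group id is illustrative:
    sc.setJobGroup("nightly-etl", "nightly ETL jobs", interruptOnCancel = true)
    // later, typically from another thread:
    sc.cancelJobGroup("nightly-etl")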
cancelStage(int, String) - Method in class org.apache.spark.SparkContext
Cancel a given stage and all jobs associated with it.
cancelStage(int) - Method in class org.apache.spark.SparkContext
Cancel a given stage and all jobs associated with it.
cancelTasks(int, boolean) - Method in interface org.apache.spark.scheduler.TaskScheduler
 
canCreate(String) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
Check if this cluster manager instance can create scheduler components for a certain master URL.
canDeleteWhere(Filter[]) - Method in interface org.apache.spark.sql.connector.catalog.SupportsDelete
Checks whether it is possible to delete data from a data source table that matches filter expressions.
canEqual(Object) - Static method in class org.apache.spark.ExpireDeadHosts
 
canEqual(Object) - Static method in class org.apache.spark.metrics.DirectPoolMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
 
canEqual(Object) - Static method in class org.apache.spark.metrics.JVMHeapMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.MappedPoolMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
 
canEqual(Object) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
 
canEqual(Object) - Static method in class org.apache.spark.ml.feature.Dot
 
canEqual(Object) - Static method in class org.apache.spark.ml.feature.EmptyTerm
 
canEqual(Object) - Static method in class org.apache.spark.Resubmitted
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
 
canEqual(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
 
canEqual(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator
 
canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
 
canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.BinaryType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.BooleanType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.ByteType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.DateType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.DoubleType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.FloatType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.IntegerType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.LongType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.NullType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.ShortType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.StringType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampType
 
canEqual(Object) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
 
canEqual(Object) - Static method in class org.apache.spark.StopMapOutputTracker
 
canEqual(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
 
canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
 
canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
 
canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
 
canEqual(Object) - Static method in class org.apache.spark.Success
 
canEqual(Object) - Static method in class org.apache.spark.TaskResultLost
 
canEqual(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet
 
canEqual(Object) - Static method in class org.apache.spark.UnknownReason
 
canEqual(Object) - Method in class org.apache.spark.util.MutablePair
 
canHandle(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
canHandle(Driver, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcConnectionProvider
Checks if this connection provider instance can handle the connection initiated by the driver.
canHandle(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Checks whether this dialect instance can handle a given JDBC URL.
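For illustration, a minimal custom dialect in Scala that claims URLs through canHandle; the "jdbc:mydb:" scheme is a hypothetical stand-in, not a real driver:

    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

    object MyDbDialect extends JdbcDialect {
      // Spark asks each registered dialect whether it handles the connection URL.
      override def canHandle(url: String): Boolean =
        url.toLowerCase.startsWith("jdbc:mydb:")
    }

    // Register so the dialect is consulted for matching URLs.
    JdbcDialects.registerDialect(MyDbDialect)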
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
canHandle(String) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
cannotAcquireMemoryToBuildLongHashedRelationError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotAcquireMemoryToBuildUnsafeHashedRelationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotAddMultiPartitionsOnNonatomicPartitionTableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotAllocateMemoryToGrowBytesToBytesMapError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotAlterTableWithAlterViewError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotAlterViewWithAlterTableError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotApplyTableValuedFunctionError(String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotBroadcastTableOverMaxTableBytesError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotBroadcastTableOverMaxTableRowsError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotBuildHashedRelationLargerThan8GError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotBuildHashedRelationWithUniqueKeysExceededError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCastError(DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCastFromNullTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCastToDateTimeError(Object, DataType, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotChangeDecimalPrecisionError(Decimal, int, int, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotChangeStorageLevelError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
cannotCleanReservedNamespacePropertyError(String, ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
cannotCleanReservedTablePropertyError(String, ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
cannotClearOutputDirectoryError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotClearPartitionDirectoryError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCloneOrCopyReadOnlySQLConfError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCompareCostWithTargetCostError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotConvertColumnToJSONError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotConvertDataTypeToParquetTypeError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotConvertOrcTimestampNTZToTimestampLTZError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotConvertOrcTimestampToTimestampNTZError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateArrayWithElementsExceedLimitError(long, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateColumnarReaderError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateDatabaseWithSameNameAsPreservedDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotCreateJDBCNamespaceUsingProviderError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotCreateJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotCreateJDBCTableUsingLocationError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotCreateJDBCTableUsingProviderError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotCreateJDBCTableWithPartitionsError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateParquetConverterForDataTypeError(DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateParquetConverterForDecimalTypeError(DecimalType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateParquetConverterForTypeError(DecimalType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateStagingDirError(String, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotCreateTableWithBothProviderAndSerdeError(Option<String>, Option<SerdeInfo>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotCreateTempViewUsingHiveDataSourceError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotDeleteTableWhereFiltersError(Table, Filter[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotDropBuiltinFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotDropDefaultDatabaseError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotDropMultiPartitionsOnNonatomicPartitionTableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotDropNonemptyDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotDropNonemptyNamespaceError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotDropViewWithDropTableError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotEvaluateExpressionError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotExecuteStreamingRelationExecError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotFetchTablesOfDatabaseError(String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotFindCatalogToHandleIdentifierError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotFindColumnError(String, String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotFindColumnInRelationOutputError(String, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotFindConstructorForTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotFindEncoderForTypeError(String, WalkedTypePath) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotFindPartitionColumnInPartitionSchemaError(StructField, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotGenerateCodeForExpressionError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotGenerateCodeForUncomparableTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotGenerateCodeForUnsupportedTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotGetEventTimeWatermarkError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotGetJdbcTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotGetOuterPointerForInnerClassError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotGetSQLConfInSchedulerEventLoopThreadError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotHaveCircularReferencesInBeanClassError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotHaveCircularReferencesInClassError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotInstantiateAbstractCatalogPluginClassError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotInterpolateClassIntoCodeBlockError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotLoadClassNotOnClassPathError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotLoadClassWhenRegisteringFunctionError(String, FunctionIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotLoadUserDefinedTypeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotMergeClassWithOtherClassError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotMergeDecimalTypesWithIncompatiblePrecisionAndScaleError(int, int, int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotMergeDecimalTypesWithIncompatiblePrecisionError(int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotMergeDecimalTypesWithIncompatibleScaleError(int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotMergeIncompatibleDataTypesError(DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotModifyValueOfSparkConfigError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotModifyValueOfStaticConfigError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotMutateReadOnlySQLConfError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotOperateManagedTableWithExistingLocationError(String, TableIdentifier, Path) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotOperateOnHiveDataSourceFilesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotOverwritePathBeingReadFromError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotOverwriteTableThatIsBeingReadFromError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotParseDecimalError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotParseIntervalError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotParseIntervalValueError(String, SqlBaseParser.TypeConstructorContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
cannotParseJsonArraysAsStructsError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotParseStatisticAsPercentileError(String, NumberFormatException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotParseStringAsDataTypeError(JsonParser, JsonToken, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotParseStringAsDataTypeError(String, String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotParseValueTypeError(String, String, SqlBaseParser.TypeConstructorContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
cannotPartitionByNestedColumnError(NamedReference) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotPassTypedColumnInUntypedSelectError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotPurgeAsBreakInternalStateError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotReadCorruptedTablePropertyError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotReadFilesError(Throwable, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotReadFooterForFileError(Path, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotReadFooterForFileError(FileStatus, RuntimeException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotRecognizeHiveTypeError(ParseException, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotRefreshBuiltInFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotRefreshTempFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotRemovePartitionDirError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotRemoveReservedPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotRenameTableWithAlterViewError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotRenameTempViewToExistingTableError(TableIdentifier, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotRenameTempViewWithDatabaseSpecifiedError(TableIdentifier, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotReplaceMissingTableError(Identifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotReplaceMissingTableError(Identifier, Option<Throwable>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveAttributeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveColumnGivenInputColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveColumnNameAmongAttributesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveColumnNameAmongFieldsError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveStarExpandGivenInputColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveUserSpecifiedColumnsError(String, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotResolveWindowReferenceError(String, SqlBaseParser.WindowClauseContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
cannotRetrieveTableOrViewNotInSameDatabaseError(Seq<QualifiedTableName>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotRewriteDomainJoinWithConditionsError(Seq<Expression>, DomainJoin) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotRunSubmitMapStageOnZeroPartitionRDDError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
cannotSafelyMergeSerdePropertiesError(Map<String, String>, Map<String, String>, Set<String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotSaveBlockOnDecommissionedExecutorError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
cannotSaveIntervalIntoExternalStorageError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotSetJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotSetTimeoutDurationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotSetTimeoutTimestampError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotSpecifyBothJdbcTableNameAndQueryError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotSpecifyDatabaseForTempViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotSpecifyWindowFrameError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotTerminateGeneratorError(UnresolvedGenerator) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotTranslateExpressionToSourceFilterError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotTranslateNonNullValueForFieldError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotUnsetJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotUseAllColumnsForPartitionColumnsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotUseDataTypeForPartitionColumnError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotUseIntervalTypeInTableSchemaError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotUseInvalidJavaIdentifierAsFieldNameError(String, WalkedTypePath) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
cannotUseMapSideCombiningWithArrayKeyError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
cannotUseMixtureOfAggFunctionAndGroupAggPandasUDFError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotUsePreservedDatabaseAsCurrentDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotWriteDataToRelationsWithMultiplePathsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotWriteIncompatibleDataToTableError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotWriteNotEnoughColumnsToTableError(String, Seq<Attribute>, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cannotWriteTooManyColumnsToTableError(String, Seq<Attribute>, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
canonicalName() - Method in interface org.apache.spark.sql.connector.catalog.functions.BoundFunction
Returns the canonical name of this function, used to determine if functions are equivalent.
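A minimal sketch of a bound scalar function supplying a stable canonical name; the object, function name, and "example.strlen" identifier are illustrative:

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.connector.catalog.functions.ScalarFunction
    import org.apache.spark.sql.types.{DataType, DataTypes}

    object StrLen extends ScalarFunction[Int] {
      override def inputTypes(): Array[DataType] = Array(DataTypes.StringType)
      override def resultType(): DataType = DataTypes.IntegerType
      override def name(): String = "strlen"
      // Two functions with the same canonical name are treated as equivalent.
      override def canonicalName(): String = "example.strlen"
      override def produceResult(input: InternalRow): Int =
        input.getUTF8String(0).numChars()
    }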
CanonicalRandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
 
canOnlyZipRDDsWithSamePartitionSizeError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
canWrite(DataType, DataType, boolean, Function2<String, String, Object>, String, Enumeration.Value, Function1<String, BoxedUnit>) - Static method in class org.apache.spark.sql.types.DataType
Returns true if the write data type can be read using the read data type.
capabilities() - Method in interface org.apache.spark.sql.connector.catalog.Table
Returns the set of capabilities for this table.
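A minimal sketch of a connector table that only advertises batch reads; the class name and schema are illustrative, and the actual read path is omitted:

    import java.util
    import org.apache.spark.sql.connector.catalog.{Table, TableCapability}
    import org.apache.spark.sql.types.StructType

    class ExampleTable(tableSchema: StructType) extends Table {
      override def name(): String = "example"
      override def schema(): StructType = tableSchema
      // Spark checks this set before planning reads or writes against the table.
      override def capabilities(): util.Set[TableCapability] =
        util.EnumSet.of(TableCapability.BATCH_READ)
    }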
cardinality() - Method in class org.apache.spark.util.sketch.BloomFilter
 
cartesian(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
cartesian(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
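For two small RDDs the result is every pairing, as in this sketch (assumes an active SparkContext sc; output ordering may vary):

    val letters = sc.parallelize(Seq("a", "b"))
    val numbers = sc.parallelize(Seq(1, 2))
    // Every (letter, number) pair; note the result size is |this| * |other|.
    letters.cartesian(numbers).collect()
    // e.g. Array((a,1), (a,2), (b,1), (b,2))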
CaseInsensitiveStringMap - Class in org.apache.spark.sql.util
Case-insensitive map of string keys to string values.
CaseInsensitiveStringMap(Map<String, String>) - Constructor for class org.apache.spark.sql.util.CaseInsensitiveStringMap
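A small usage sketch; keys are normalized internally, so lookups ignore case:

    import java.util.{HashMap => JHashMap}
    import org.apache.spark.sql.util.CaseInsensitiveStringMap

    val raw = new JHashMap[String, String]()
    raw.put("Path", "/tmp/data")
    val opts = new CaseInsensitiveStringMap(raw)
    opts.get("PATH")  // "/tmp/data": lookups match regardless of the key's case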
 
caseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
Whether to do a case-sensitive comparison over the stop words.
cast(DataType) - Method in class org.apache.spark.sql.Column
Casts the column to a different data type.
cast(String) - Method in class org.apache.spark.sql.Column
Casts the column to a different data type, using the canonical string representation of the type.
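The two overloads are equivalent; a sketch assuming a DataFrame df with a string column age:

    import org.apache.spark.sql.functions.col
    import org.apache.spark.sql.types.IntegerType

    df.select(
      col("age").cast(IntegerType),  // cast via a DataType instance
      col("age").cast("int")         // cast via the type's canonical string name
    )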
Cast - Class in org.apache.spark.sql.connector.expressions
Represents a cast expression in the public logical expression API.
Cast(Expression, DataType) - Constructor for class org.apache.spark.sql.connector.expressions.Cast
 
castingCauseOverflowError(Object, DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
castingCauseOverflowErrorInTableInsert(DataType, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
Catalog - Class in org.apache.spark.sql.catalog
Catalog interface for Spark.
Catalog() - Constructor for class org.apache.spark.sql.catalog.Catalog
 
catalog() - Method in class org.apache.spark.sql.SparkSession
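The catalog is reached through a session; a small sketch assuming an active SparkSession spark (the table name is hypothetical):

    // Metadata operations on databases, tables, and functions.
    spark.catalog.listTables().show()
    spark.catalog.tableExists("events")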
 
CatalogAndIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogAndIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier
 
CatalogAndIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier$
 
CatalogAndMultipartIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogAndNamespace() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogAndNamespace() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace
 
CatalogAndNamespace$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace$
 
CatalogExtension - Interface in org.apache.spark.sql.connector.catalog
An API to extend the Spark built-in session catalog.
catalogFailToCallPublicNoArgConstructorError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
catalogFailToFindPublicNoArgConstructorError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
CatalogHelper(CatalogPlugin) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
 
catalogManager() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
 
CatalogNotFoundException - Exception in org.apache.spark.sql.connector.catalog
 
CatalogNotFoundException(String, Throwable) - Constructor for exception org.apache.spark.sql.connector.catalog.CatalogNotFoundException
 
CatalogNotFoundException(String) - Constructor for exception org.apache.spark.sql.connector.catalog.CatalogNotFoundException
 
CatalogPlugin - Interface in org.apache.spark.sql.connector.catalog
A marker interface to provide a catalog implementation for Spark.
catalogPluginClassNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
catalogPluginClassNotFoundForCatalogError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
catalogPluginClassNotImplementedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
Catalogs - Class in org.apache.spark.sql.connector.catalog
 
Catalogs() - Constructor for class org.apache.spark.sql.connector.catalog.Catalogs
 
catalogString() - Method in class org.apache.spark.sql.types.ArrayType
 
catalogString() - Static method in class org.apache.spark.sql.types.BinaryType
 
catalogString() - Static method in class org.apache.spark.sql.types.BooleanType
 
catalogString() - Static method in class org.apache.spark.sql.types.ByteType
 
catalogString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
 
catalogString() - Method in class org.apache.spark.sql.types.DataType
String representation for the type saved in external catalogs.
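For example, in a Spark shell:

    import org.apache.spark.sql.types._

    StructType(Seq(StructField("id", LongType))).catalogString
    // "struct<id:bigint>"
    MapType(StringType, IntegerType).catalogString
    // "map<string,int>"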
catalogString() - Static method in class org.apache.spark.sql.types.DateType
 
catalogString() - Static method in class org.apache.spark.sql.types.DoubleType
 
catalogString() - Static method in class org.apache.spark.sql.types.FloatType
 
catalogString() - Static method in class org.apache.spark.sql.types.IntegerType
 
catalogString() - Static method in class org.apache.spark.sql.types.LongType
 
catalogString() - Method in class org.apache.spark.sql.types.MapType
 
catalogString() - Static method in class org.apache.spark.sql.types.NullType
 
catalogString() - Static method in class org.apache.spark.sql.types.ShortType
 
catalogString() - Static method in class org.apache.spark.sql.types.StringType
 
catalogString() - Method in class org.apache.spark.sql.types.StructType
 
catalogString() - Static method in class org.apache.spark.sql.types.TimestampType
 
catalogString() - Method in class org.apache.spark.sql.types.UserDefinedType
 
CatalogV2Implicits - Class in org.apache.spark.sql.connector.catalog
Conversion helpers for working with v2 CatalogPlugin.
CatalogV2Implicits() - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits
 
CatalogV2Implicits.BucketSpecHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.CatalogHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.FunctionIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.IdentifierHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.MultipartIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.NamespaceHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.PartitionTypeHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.TableIdentifierHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Implicits.TransformHelper - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Util - Class in org.apache.spark.sql.connector.catalog
 
CatalogV2Util() - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Util
 
CatalystScan - Interface in org.apache.spark.sql.sources
::Experimental:: An interface for experimenting with a more direct connection to the query planner.
Categorical() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
categoricalCols() - Method in class org.apache.spark.ml.feature.FeatureHasher
Numeric columns to treat as categorical features.
categoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
CategoricalSplit - Class in org.apache.spark.ml.tree
Split which tests a categorical feature.
categories() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
 
categories() - Method in class org.apache.spark.mllib.tree.model.Split
 
categoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
 
categorySizes() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
cause() - Method in exception org.apache.spark.sql.AnalysisException
 
cause() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
 
CausedBy - Class in org.apache.spark.util
Extractor Object for pulling out the root cause of an error.
CausedBy() - Constructor for class org.apache.spark.util.CausedBy
 
cbrt(Column) - Static method in class org.apache.spark.sql.functions
Computes the cube-root of the given value.
cbrt(String) - Static method in class org.apache.spark.sql.functions
Computes the cube-root of the given column.
ceil(Column, Column) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given value of e to scale decimal places.
ceil(Column) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given value of e to 0 decimal places.
ceil(String) - Static method in class org.apache.spark.sql.functions
Computes the ceiling of the given value of e to 0 decimal places.
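A sketch of these functions side by side, assuming a DataFrame df with a numeric column x:

    import org.apache.spark.sql.functions.{cbrt, ceil, col, lit}

    df.select(
      cbrt(col("x")),          // cube root
      ceil(col("x")),          // ceiling to 0 decimal places
      ceil(col("x"), lit(2))   // ceiling to 2 decimal places
    )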
ceil() - Method in class org.apache.spark.sql.types.Decimal
 
censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
 
censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
censorCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
Param for censor column name.
chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, T, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<U>>, Function0<Parsers.Parser<Function2<T, U, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
chainr1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, U, U>>>, Function2<T, U, U>, U) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
changePrecision(int, int) - Method in class org.apache.spark.sql.types.Decimal
Update precision and scale while keeping our value the same, and return true if successful.
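A sketch of the in-place update:

    import org.apache.spark.sql.types.Decimal

    val d = Decimal("3.14159")
    d.changePrecision(4, 2)  // true: rounds in place to 3.14, which fits precision 4
    d.changePrecision(2, 2)  // false: 3.14 needs 3 digits, precision 2 is too small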
channel() - Method in interface org.apache.spark.shuffle.api.WritableByteChannelWrapper
The underlying channel to write bytes into.
channelRead0(ChannelHandlerContext, byte[]) - Method in class org.apache.spark.api.r.RBackendAuthHandler
 
charOrVarcharTypeAsStringUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
CharType - Class in org.apache.spark.sql.types
 
CharType(int) - Constructor for class org.apache.spark.sql.types.CharType
 
charTypeMissingLengthError(String, SqlBaseParser.PrimitiveDataTypeContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
checkAndGetK8sMasterUrl(String) - Static method in class org.apache.spark.util.Utils
Check the validity of the given Kubernetes master URL and return the resolved URL.
checkColumnNameDuplication(Seq<String>, String, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if input column names have duplicate identifiers.
checkColumnNameDuplication(Seq<String>, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if input column names have duplicate identifiers.
checkColumnType(StructType, String, DataType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given schema contains a column of the required data type.
checkColumnTypes(StructType, String, Seq<DataType>, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given schema contains a column of one of the required data types.
checkDataColumns(RFormula, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
DataFrame column check.
checkedCast() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
Attempts to safely cast a user/item id to an Int.
checkFileExists(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
Check if the file exists at the given path.
checkHost(String) - Static method in class org.apache.spark.util.Utils
Checks that the host contains only a valid hostname or IP address, without a port. NOTE: an IPv6 address must be enclosed in [].
checkHostPort(String) - Static method in class org.apache.spark.util.Utils
 
checkNumericType(StructType, String, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
Check whether the given schema contains a column of the numeric data type.
checkOffHeapEnabled(SparkConf, long) - Static method in class org.apache.spark.util.Utils
Returns 0 if MEMORY_OFFHEAP_ENABLED is false.
checkpoint() - Method in interface org.apache.spark.api.java.JavaRDDLike
Mark this RDD for checkpointing.
checkpoint() - Method in class org.apache.spark.graphx.Graph
Mark this Graph for checkpointing.
checkpoint() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
checkpoint() - Method in class org.apache.spark.graphx.impl.GraphImpl
 
checkpoint() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
 
checkpoint() - Method in class org.apache.spark.rdd.HadoopRDD
 
checkpoint() - Method in class org.apache.spark.rdd.RDD
Mark this RDD for checkpointing.
checkpoint() - Method in class org.apache.spark.sql.Dataset
Eagerly checkpoint a Dataset and return the new Dataset.
checkpoint(boolean) - Method in class org.apache.spark.sql.Dataset
Returns a checkpointed version of this Dataset.
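A sketch of both the RDD and Dataset flavors, assuming an active SparkSession spark; the checkpoint directory is illustrative:

    spark.sparkContext.setCheckpointDir("/tmp/checkpoints")

    val rdd = spark.sparkContext.parallelize(1 to 100)
    rdd.checkpoint()   // lazy: materialized by the next action
    rdd.count()

    val df = spark.range(100).toDF("id")
    val eagerly  = df.checkpoint()               // eager by default
    val deferred = df.checkpoint(eager = false)  // lazy variant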
checkpoint(Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Enable periodic checkpointing of RDDs of this DStream.
checkpoint(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
Sets the context to periodically checkpoint the DStream operations for master fault-tolerance.
checkpoint(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
Enable periodic checkpointing of RDDs of this DStream.
checkpoint(String) - Method in class org.apache.spark.streaming.StreamingContext
Set the context to periodically checkpoint the DStream operations for driver fault-tolerance.
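A streaming sketch distinguishing the context-level and DStream-level calls; the host, port, and paths are illustrative, and sc is an existing SparkContext:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(1))
    ssc.checkpoint("/tmp/streaming-ckpt")   // context-level, for driver recovery
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.checkpoint(Seconds(10))           // DStream-level periodic RDD checkpointing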
checkpointCleaned(long) - Method in interface org.apache.spark.CleanerListener
 
checkpointDirectoryHasNotBeenSetInSparkContextError() - Static method in class org.apache.spark.errors.SparkCoreErrors
 
Checkpointed() - Static method in class org.apache.spark.rdd.CheckpointState
 
checkpointFailedToSaveError(int, Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
CheckpointingInProgress() - Static method in class org.apache.spark.rdd.CheckpointState
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassifier
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDA
 
checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDAModel
 
checkpointInterval() - Method in interface org.apache.spark.ml.param.shared.HasCheckpointInterval
Param for setting the checkpoint interval (>= 1) or disabling checkpointing (-1).
checkpointInterval() - Method in class org.apache.spark.ml.recommendation.ALS
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressor
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
 
checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
 
checkpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
 
checkpointLocationNotSpecifiedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
checkpointRDDBlockIdNotFoundError(RDDBlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
checkpointRDDHasDifferentNumberOfPartitionsFromOriginalRDDError(int, int, int, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
CheckpointReader - Class in org.apache.spark.streaming
 
CheckpointReader() - Constructor for class org.apache.spark.streaming.CheckpointReader
 
CheckpointState - Class in org.apache.spark.rdd
Enumeration to manage state transitions of an RDD through checkpointing
CheckpointState() - Constructor for class org.apache.spark.rdd.CheckpointState
 
checkSchemaColumnNameDuplication(DataType, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if an input schema has duplicate column names.
checkSchemaColumnNameDuplication(StructType, String, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if an input schema has duplicate column names.
checkSingleVsMultiColumnParams(Params, Seq<Param<?>>, Seq<Param<?>>) - Static method in class org.apache.spark.ml.param.ParamValidators
Utility for Param validity checks for Transformers which have both single- and multi-column support.
checkSpeculatableTasks(long) - Method in interface org.apache.spark.scheduler.Schedulable
 
checkState(boolean, Function0<String>) - Static method in class org.apache.spark.streaming.util.HdfsUtils
 
checkThresholdConsistency() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
If threshold and thresholds are both set, ensures they are consistent.
checkTransformDuplication(Seq<Transform>, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
Checks if the partitioning transforms are being duplicated or not.
checkUIViewPermissions() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
 
checkUIViewPermissions(String, Option<String>, String) - Method in interface org.apache.spark.status.api.v1.UIRoot
 
child() - Method in class org.apache.spark.sql.connector.expressions.filter.Not
 
child() - Method in class org.apache.spark.sql.sources.Not
 
CHILD_CONNECTION_TIMEOUT - Static variable in class org.apache.spark.launcher.SparkLauncher
Maximum time (in ms) to wait for a child process to connect back to the launcher server when using start().
CHILD_PROCESS_LOGGER_NAME - Static variable in class org.apache.spark.launcher.SparkLauncher
Logger name to use when launching a child process.
ChildFirstURLClassLoader - Class in org.apache.spark.util
A mutable class loader that gives preference to its own URLs over the parent class loader when loading classes and resources.
ChildFirstURLClassLoader(URL[], ClassLoader) - Constructor for class org.apache.spark.util.ChildFirstURLClassLoader
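A usage sketch; the jar path and class name are hypothetical:

    import java.net.URL
    import org.apache.spark.util.ChildFirstURLClassLoader

    // Classes and resources found in my-lib.jar shadow the parent's copies.
    val loader = new ChildFirstURLClassLoader(
      Array(new URL("file:/tmp/my-lib.jar")), getClass.getClassLoader)
    val cls = loader.loadClass("com.example.MyClass")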
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Avg
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Count
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.CountStar
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.GeneralAggregateFunc
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Max
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Min
 
children() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Sum
 
children() - Method in class org.apache.spark.sql.connector.expressions.Cast
 
children() - Method in interface org.apache.spark.sql.connector.expressions.Expression
Returns an array of the children of this node.
children() - Method in class org.apache.spark.sql.connector.expressions.GeneralScalarExpression
 
children() - Method in interface org.apache.spark.sql.connector.expressions.Literal
 
children() - Method in interface org.apache.spark.sql.connector.expressions.NamedReference
 
children() - Method in interface org.apache.spark.sql.connector.expressions.SortOrder
 
children() - Method in interface org.apache.spark.sql.connector.expressions.Transform
 
chiSqFunc() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method
 
ChiSqSelector - Class in org.apache.spark.ml.feature
Deprecated.
Use UnivariateFeatureSelector instead. Since 3.1.1.
ChiSqSelector(String) - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
Deprecated.
 
ChiSqSelector() - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
Deprecated.
 
ChiSqSelector - Class in org.apache.spark.mllib.feature
Creates a ChiSquared feature selector.
ChiSqSelector() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
 
ChiSqSelector(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
This is the same as calling this() followed by setNumTopFeatures(numTopFeatures).
ChiSqSelectorModel - Class in org.apache.spark.ml.feature
Model fitted by ChiSqSelector.
ChiSqSelectorModel - Class in org.apache.spark.mllib.feature
Chi Squared selector model.
ChiSqSelectorModel(int[]) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel
 
ChiSqSelectorModel.ChiSqSelectorModelWriter - Class in org.apache.spark.ml.feature
 
ChiSqSelectorModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.feature
 
ChiSqSelectorModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.feature
Model data for import/export
ChiSqSelectorModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.feature
 
ChiSqSelectorModelWriter(ChiSqSelectorModel) - Constructor for class org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter
 
chiSqTest(Vector, Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's chi-squared goodness of fit test of the observed data against the expected distribution.
chiSqTest(Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's chi-squared goodness of fit test of the observed data against the uniform distribution, with each category having an expected frequency of 1 / observed.size.
chiSqTest(Matrix) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's independence test on the input contingency matrix, which cannot contain negative entries or columns or rows that sum up to 0.
chiSqTest(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
Conduct Pearson's independence test for every feature against the label across the input RDD.
chiSqTest(JavaRDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
Java-friendly version of chiSqTest()
ChiSqTest - Class in org.apache.spark.mllib.stat.test
Conduct the chi-squared test for the input RDDs using the specified method.
ChiSqTest() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest
 
ChiSqTest.Method - Class in org.apache.spark.mllib.stat.test
param: name String name for the method.
ChiSqTest.Method$ - Class in org.apache.spark.mllib.stat.test
 
ChiSqTest.NullHypothesis$ - Class in org.apache.spark.mllib.stat.test
 
ChiSqTestResult - Class in org.apache.spark.mllib.stat.test
Object containing the test results for the chi-squared hypothesis test.
chiSquared(Vector, Vector, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
 
chiSquaredFeatures(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
Conduct Pearson's independence test for each feature against the label across the input RDD.
chiSquaredMatrix(Matrix, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
 
ChiSquareTest - Class in org.apache.spark.ml.stat
Chi-square hypothesis testing for categorical data.
ChiSquareTest() - Constructor for class org.apache.spark.ml.stat.ChiSquareTest
 
chmod700(File) - Static method in class org.apache.spark.util.Utils
JDK equivalent of chmod 700 file.
CholeskyDecomposition - Class in org.apache.spark.mllib.linalg
Compute Cholesky decomposition.
CholeskyDecomposition() - Constructor for class org.apache.spark.mllib.linalg.CholeskyDecomposition
 
chunkId() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
 
cipherStream() - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler
The encrypted stream that may get into an unhealthy state.
classDoesNotImplementUserDefinedAggregateFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
classForName(String, boolean, boolean) - Static method in class org.apache.spark.util.Utils
Preferred alternative to Class.forName(className), as well as Class.forName(className, initialize, loader) with current thread's ContextClassLoader.
classHasUnexpectedSerializerError(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
Classification() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
 
ClassificationLoss - Interface in org.apache.spark.mllib.tree.loss
 
ClassificationModel<FeaturesType,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
Model produced by a Classifier.
ClassificationModel() - Constructor for class org.apache.spark.ml.classification.ClassificationModel
 
ClassificationModel - Interface in org.apache.spark.mllib.classification
Represents a classification model that predicts to which of a set of categories an example belongs.
ClassificationSummary - Interface in org.apache.spark.ml.classification
Abstraction for multiclass classification results for a given model.
Classifier<FeaturesType,E extends Classifier<FeaturesType,E,M>,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
Single-label binary or multiclass classification.
Classifier() - Constructor for class org.apache.spark.ml.classification.Classifier
 
classifier() - Method in class org.apache.spark.ml.classification.OneVsRest
 
classifier() - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
classifier() - Method in interface org.apache.spark.ml.classification.OneVsRestParams
Param for the base binary classifier into which we reduce multiclass classification.
ClassifierParams - Interface in org.apache.spark.ml.classification
(private[spark]) Params for classification.
ClassifierTypeTrait - Interface in org.apache.spark.ml.classification
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
classifyException(String, Throwable) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Gets a dialect exception, classifies it and wraps it by AnalysisException.
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
classIsLoadable(String) - Static method in class org.apache.spark.util.Utils
Determines whether the provided class is loadable in the current thread.
className() - Method in class org.apache.spark.ExceptionFailure
 
className() - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
Unique class name for identifying JSON object encoded by this class.
className() - Method in class org.apache.spark.sql.catalog.Function
 
classpathEntries() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
 
classTag() - Method in class org.apache.spark.api.java.JavaDoubleRDD
 
classTag() - Method in class org.apache.spark.api.java.JavaPairRDD
 
classTag() - Method in class org.apache.spark.api.java.JavaRDD
 
classTag() - Method in interface org.apache.spark.api.java.JavaRDDLike
 
classTag() - Method in class org.apache.spark.sql.Dataset
 
classTag() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
 
classTag() - Method in interface org.apache.spark.storage.memory.MemoryEntry
 
classTag() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaDStream
 
classTag() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
 
classTag() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
 
classUnsupportedByMapObjectsError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
classWithoutPublicNonArgumentConstructorError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
clean(long, boolean) - Method in class org.apache.spark.streaming.util.WriteAheadLog
Clean all the records that are older than the threshold time.
clean(Object, boolean, boolean) - Static method in class org.apache.spark.util.ClosureCleaner
Clean the given closure in place.
CleanAccum - Class in org.apache.spark
 
CleanAccum(long) - Constructor for class org.apache.spark.CleanAccum
 
CleanBroadcast - Class in org.apache.spark
 
CleanBroadcast(long) - Constructor for class org.apache.spark.CleanBroadcast
 
CleanCheckpoint - Class in org.apache.spark
 
CleanCheckpoint(int) - Constructor for class org.apache.spark.CleanCheckpoint
 
CleanerListener - Interface in org.apache.spark
Listener class used when any item has been cleaned by the Cleaner class.
cleaning() - Method in class org.apache.spark.status.LiveStage
 
CleanRDD - Class in org.apache.spark
 
CleanRDD(int) - Constructor for class org.apache.spark.CleanRDD
 
CleanShuffle - Class in org.apache.spark
 
CleanShuffle(int) - Constructor for class org.apache.spark.CleanShuffle
 
cleanShuffleDependencies(boolean) - Method in class org.apache.spark.rdd.RDD
Removes an RDD's shuffles and its non-persisted ancestors.
CleanSparkListener - Class in org.apache.spark
 
CleanSparkListener(SparkListener) - Constructor for class org.apache.spark.CleanSparkListener
 
cleanupApplication() - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
Called once at the end of the Spark application to clean up any existing shuffle state.
cleanupOldBlocks(long) - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockHandler
Clean up blocks older than the given threshold time.
cleanUpSourceFilesUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
CleanupTask - Interface in org.apache.spark
Classes that represent cleaning tasks.
CleanupTaskWeakReference - Class in org.apache.spark
A WeakReference associated with a CleanupTask.
CleanupTaskWeakReference(CleanupTask, Object, ReferenceQueue<Object>) - Constructor for class org.apache.spark.CleanupTaskWeakReference
 
clear(Param<?>) - Method in interface org.apache.spark.ml.param.Params
Clears the user-supplied value for the input param.
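A sketch with a concrete Params implementation:

    import org.apache.spark.ml.classification.LogisticRegression

    val lr = new LogisticRegression().setMaxIter(50)
    lr.clear(lr.maxIter)   // drops the user-supplied value...
    lr.getMaxIter          // ...so the default (100) applies again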
clear() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
clear() - Method in class org.apache.spark.sql.util.ExecutionListenerManager
Removes all the registered QueryExecutionListener.
clear() - Static method in class org.apache.spark.util.AccumulatorContext
Clears all registered AccumulatorV2s.
clearActive() - Static method in class org.apache.spark.sql.SQLContext
Deprecated.
Use SparkSession.clearActiveSession instead. Since 2.0.0.
clearActiveSession() - Static method in class org.apache.spark.sql.SparkSession
Clears the active SparkSession for current thread.
clearCache() - Method in class org.apache.spark.sql.catalog.Catalog
Removes all cached tables from the in-memory cache.
clearCache() - Method in class org.apache.spark.sql.SQLContext
Removes all cached tables from the in-memory cache.
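For instance (the table name is hypothetical):

    spark.catalog.cacheTable("events")
    spark.catalog.clearCache()   // removes every cached table and view from memory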
clearCallSite() - Method in class org.apache.spark.api.java.JavaSparkContext
Pass-through to SparkContext.clearCallSite.
clearCallSite() - Method in class org.apache.spark.SparkContext
Clear the thread-local property for overriding the call sites of actions and RDDs.
clearDefaultSession() - Static method in class org.apache.spark.sql.SparkSession
Clears the default SparkSession that is returned by the builder.
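A sketch of the thread-local vs. global pair, assuming an existing session spark:

    import org.apache.spark.sql.SparkSession

    SparkSession.setActiveSession(spark)   // thread-local
    SparkSession.clearActiveSession()      // getActiveSession is now empty
    SparkSession.clearDefaultSession()     // the builder no longer reuses the old default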
clearDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
 
clearDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
 
clearDependencies() - Method in class org.apache.spark.rdd.UnionRDD
 
clearExecutorResourceRequests() - Method in class org.apache.spark.resource.ResourceProfileBuilder
 
clearJobGroup() - Method in class org.apache.spark.api.java.JavaSparkContext
Clear the current thread's job group ID and its description.
clearJobGroup() - Method in class org.apache.spark.SparkContext
Clear the current thread's job group ID and its description.
clearTaskResourceRequests() - Method in class org.apache.spark.resource.ResourceProfileBuilder
 
clearThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
Clears the threshold so that predict will output raw prediction scores.
clearThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
Clears the threshold so that predict will output raw prediction scores.
Clock - Interface in org.apache.spark.util
An interface to represent clocks, so that they can be mocked out in unit tests.
CLogLog$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
 
clone() - Method in class org.apache.spark.SparkConf
Copy this object
clone() - Method in class org.apache.spark.sql.ExperimentalMethods
 
clone() - Method in class org.apache.spark.sql.types.Decimal
 
clone() - Method in class org.apache.spark.storage.StorageLevel
 
clone() - Method in class org.apache.spark.util.random.BernoulliCellSampler
 
clone() - Method in class org.apache.spark.util.random.BernoulliSampler
 
clone() - Method in class org.apache.spark.util.random.PoissonSampler
 
clone() - Method in interface org.apache.spark.util.random.RandomSampler
Returns a copy of the RandomSampler object.
clone(T, SerializerInstance, ClassTag<T>) - Static method in class org.apache.spark.util.Utils
Clone an object using a Spark serializer.
cloneComplement() - Method in class org.apache.spark.util.random.BernoulliCellSampler
Return a sampler that is the complement of the range specified by the current sampler.
cloneProperties(Properties) - Static method in class org.apache.spark.util.Utils
Create a new properties object with the same values as `props`
close() - Method in class org.apache.spark.api.java.JavaSparkContext
 
close() - Method in class org.apache.spark.io.NioBufferedFileInputStream
 
close() - Method in class org.apache.spark.io.ReadAheadInputStream
 
close() - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler
 
close() - Method in class org.apache.spark.serializer.DeserializationStream
 
close() - Method in class org.apache.spark.serializer.SerializationStream
 
close(Throwable) - Method in class org.apache.spark.sql.ForeachWriter
Called when stopping to process one partition of new data in the executor side.
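close pairs with open and process in a ForeachWriter; a minimal sketch for a streaming sink:

    import org.apache.spark.sql.ForeachWriter

    val writer = new ForeachWriter[String] {
      def open(partitionId: Long, epochId: Long): Boolean = true // accept this partition
      def process(value: String): Unit = println(value)
      def close(errorOrNull: Throwable): Unit = {
        // Invoked once per partition; errorOrNull is non-null if processing failed.
        if (errorOrNull != null) errorOrNull.printStackTrace()
      }
    }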
close() - Method in class org.apache.spark.sql.SparkSession
Synonym for stop().
close() - Method in class org.apache.spark.sql.vectorized.ArrowColumnVector
 
close() - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
Called to close all the columns in this batch.
close() - Method in class org.apache.spark.sql.vectorized.ColumnVector
Cleans up memory for this column vector.
close() - Method in class org.apache.spark.storage.BufferReleasingInputStream
 
close() - Method in class org.apache.spark.storage.CountingWritableChannel
 
close() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
 
close() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
 
close() - Method in class org.apache.spark.streaming.util.WriteAheadLog
Close this log and release any resources.
ClosureCleaner - Class in org.apache.spark.util
A cleaner that renders closures serializable when it is safe to do so.
ClosureCleaner() - Constructor for class org.apache.spark.util.ClosureCleaner
 
closureSerializer() - Method in class org.apache.spark.SparkEnv
 
cls() - Method in class org.apache.spark.sql.types.ObjectType
 
cls() - Method in class org.apache.spark.util.MethodIdentifier
 
clsTag() - Method in interface org.apache.spark.sql.Encoder
A ClassTag that can be used to construct an Array to contain a collection of T.
cluster() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
 
cluster() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
 
Cluster$() - Constructor for class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
 
clusterCenter() - Method in class org.apache.spark.ml.clustering.ClusterData
 
clusterCenters() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
clusterCenters() - Method in class org.apache.spark.ml.clustering.KMeansModel
 
clusterCenters() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Leaf cluster centers.
clusterCenters() - Method in class org.apache.spark.mllib.clustering.KMeansModel
 
clusterCenters() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
 
ClusterData - Class in org.apache.spark.ml.clustering
Helper class for storing model data.
ClusterData(int, Vector) - Constructor for class org.apache.spark.ml.clustering.ClusterData
 
clustered(Expression[]) - Static method in class org.apache.spark.sql.connector.distributions.Distributions
Creates a distribution where tuples that share the same values for clustering expressions are co-located in the same partition.
clustered(Expression[]) - Static method in class org.apache.spark.sql.connector.distributions.LogicalDistributions
 
ClusteredDistribution - Interface in org.apache.spark.sql.connector.distributions
A distribution where tuples that share the same values for clustering expressions are co-located in the same partition.
clusterIdx() - Method in class org.apache.spark.ml.clustering.ClusterData
 
clustering() - Method in interface org.apache.spark.sql.connector.distributions.ClusteredDistribution
Returns clustering expressions.
ClusteringEvaluator - Class in org.apache.spark.ml.evaluation
Evaluator for clustering results.
ClusteringEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
ClusteringEvaluator() - Constructor for class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
ClusteringMetrics - Class in org.apache.spark.ml.evaluation
Metrics for clustering, which expects two input columns: prediction and label.
ClusteringSummary - Class in org.apache.spark.ml.clustering
Summary of clustering algorithms.
CLUSTERS_CONFIG_PREFIX() - Static method in class org.apache.spark.kafka010.KafkaTokenSparkConf
 
clusterSchedulerError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
 
clusterSizes() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
 
ClusterStats(Vector, double, double) - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
 
ClusterStats$() - Constructor for class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats$
 
clusterWeights() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
 
cmdOnlyWorksOnPartitionedTablesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cmdOnlyWorksOnTableWithLocationError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
cn() - Method in class org.apache.spark.mllib.feature.VocabWord
 
coalesce(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaRDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load balanced and grouped by locality.
coalesce(int, RDD<?>) - Method in interface org.apache.spark.rdd.PartitionCoalescer
Coalesce the partitions of the given RDD.
coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
Return a new RDD that is reduced into numPartitions partitions.
coalesce(int) - Method in class org.apache.spark.sql.Dataset
Returns a new Dataset that has exactly numPartitions partitions, when fewer partitions are requested.
coalesce(Column...) - Static method in class org.apache.spark.sql.functions
Returns the first column that is not null, or null if all inputs are null.
coalesce(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Returns the first column that is not null, or null if all inputs are null.
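For example, a minimal Scala sketch of both coalesce variants (df is an assumed DataFrame with nullable columns a and b):
    import org.apache.spark.sql.functions.{coalesce, col, lit}

    val filled = df.select(coalesce(col("a"), col("b"), lit(0)).as("firstNonNull"))
    val onePartition = df.coalesce(1) // fewer partitions, no shuffle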
CoarseGrainedClusterMessage - Interface in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages
 
CoarseGrainedClusterMessages.AddWebUIFilter - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.AddWebUIFilter$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.DecommissionExecutor$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.DecommissionExecutorsOnHost - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.DecommissionExecutorsOnHost$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.ExecutorDecommissioning - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.ExecutorDecommissioning$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.ExecutorDecommissionSigReceived$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.GetExecutorLossReason - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.GetExecutorLossReason$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.IsExecutorAlive - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.IsExecutorAlive$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.KillExecutors - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.KillExecutors$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.KillExecutorsOnHost - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.KillExecutorsOnHost$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.KillTask - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.KillTask$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.LaunchedExecutor - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.LaunchedExecutor$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.LaunchTask - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.LaunchTask$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.MiscellaneousProcessAdded - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.MiscellaneousProcessAdded$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RegisterClusterManager - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RegisterClusterManager$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RegisterExecutor - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RegisterExecutor$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RemoveExecutor - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RemoveExecutor$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RemoveWorker - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RemoveWorker$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RequestExecutors - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RequestExecutors$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RetrieveDelegationTokens$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RetrieveSparkAppConfig - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.RetrieveSparkAppConfig$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.ReviveOffers$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.SetupDriver - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.SetupDriver$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.ShufflePushCompletion - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.ShufflePushCompletion$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.Shutdown$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.SparkAppConfig - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.SparkAppConfig$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.StatusUpdate - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.StatusUpdate$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.StopDriver$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.StopExecutor$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.StopExecutors$ - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.UpdateDelegationTokens - Class in org.apache.spark.scheduler.cluster
 
CoarseGrainedClusterMessages.UpdateDelegationTokens$ - Class in org.apache.spark.scheduler.cluster
 
code() - Method in class org.apache.spark.mllib.feature.VocabWord
 
CodegenMetrics - Class in org.apache.spark.metrics.source
Metrics for code generation.
CodegenMetrics() - Constructor for class org.apache.spark.metrics.source.CodegenMetrics
 
codeLen() - Method in class org.apache.spark.mllib.feature.VocabWord
 
coefficientMatrix() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
coefficients() - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
coefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
A vector of model coefficients for "binomial" logistic regression.
coefficients() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
 
coefficients() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
 
coefficients() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
 
coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
 
coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
 
cogroup(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
cogroup(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
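For example, a minimal Scala sketch of cogroup on pair RDDs (sc is an assumed SparkContext):
    val ratings = sc.parallelize(Seq((1, 4.0), (1, 3.0), (2, 5.0)))
    val names   = sc.parallelize(Seq((1, "alice"), (2, "bob")))
    // one tuple per key: (Iterable of ratings, Iterable of names)
    val grouped = ratings.cogroup(names)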
cogroup(KeyValueGroupedDataset<K, U>, Function3<K, Iterator<V>, Iterator<U>, TraversableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Scala-specific) Applies the given function to each group of cogrouped data.
cogroup(KeyValueGroupedDataset<K, U>, CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
(Java-specific) Applies the given function to each group of cogrouped data.
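For example, a minimal Scala sketch of the Dataset variant (spark is an assumed SparkSession):
    import spark.implicits._

    val left  = Seq((1, "a"), (1, "b"), (2, "c")).toDS().groupByKey(_._1)
    val right = Seq((1, 10), (2, 20)).toDS().groupByKey(_._1)
    val combined = left.cogroup(right) { (key, as, bs) =>
      Iterator((key, as.map(_._2).toSeq, bs.map(_._2).toSeq))
    }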
cogroup(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
cogroup(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
CoGroupedRDD<K> - Class in org.apache.spark.rdd
:: DeveloperApi :: An RDD that cogroups its parents.
CoGroupedRDD(Seq<RDD<? extends Product2<K, ?>>>, Partitioner, ClassTag<K>) - Constructor for class org.apache.spark.rdd.CoGroupedRDD
 
CoGroupFunction<K,V1,V2,R> - Interface in org.apache.spark.api.java.function
A function that returns zero or more output records from each grouping key and its values from 2 Datasets.
col(String) - Method in class org.apache.spark.sql.Dataset
Selects a column based on the column name and returns it as a Column.
col(String) - Static method in class org.apache.spark.sql.functions
Returns a Column based on the given column name.
COL_POS_KEY() - Static method in class org.apache.spark.sql.Dataset
 
coldStartStrategy() - Method in class org.apache.spark.ml.recommendation.ALS
 
coldStartStrategy() - Method in class org.apache.spark.ml.recommendation.ALSModel
 
coldStartStrategy() - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
Param for strategy for dealing with unknown or new users/items at prediction time.
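For example, a minimal Scala sketch (the column names are assumptions):
    import org.apache.spark.ml.recommendation.ALS

    val als = new ALS()
      .setUserCol("userId")         // assumed column names
      .setItemCol("movieId")
      .setRatingCol("rating")
      .setColdStartStrategy("drop") // drop NaN predictions for unseen users/items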
colIter() - Method in class org.apache.spark.ml.linalg.DenseMatrix
 
colIter() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns an iterator of column vectors.
colIter() - Method in class org.apache.spark.ml.linalg.SparseMatrix
 
colIter() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
 
colIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
Returns an iterator of column vectors.
colIter() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
collect() - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an array that contains all of the elements in this RDD.
collect() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
 
collect() - Method in class org.apache.spark.rdd.RDD
Return an array that contains all of the elements in this RDD.
collect(PartialFunction<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
Return an RDD that contains all matching values by applying f.
collect() - Method in class org.apache.spark.sql.Dataset
Returns an array that contains all rows in this Dataset.
collect_list(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a list of objects with duplicates.
collect_list(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a list of objects with duplicates.
collect_set(Column) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a set of objects with duplicate elements eliminated.
collect_set(String) - Static method in class org.apache.spark.sql.functions
Aggregate function: returns a set of objects with duplicate elements eliminated.
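For example, a minimal Scala sketch (df with columns k and v is assumed):
    import org.apache.spark.sql.functions.{collect_list, collect_set}

    val agg = df.groupBy("k").agg(
      collect_list("v").as("allValues"),     // keeps duplicates
      collect_set("v").as("distinctValues")) // drops duplicates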
collectAsList() - Method in class org.apache.spark.sql.Dataset
Returns a Java list that contains all rows in this Dataset.
collectAsMap() - Method in class org.apache.spark.api.java.JavaPairRDD
Return the key-value pairs in this RDD to the master as a Map.
collectAsMap() - Method in class org.apache.spark.rdd.PairRDDFunctions
Return the key-value pairs in this RDD to the master as a Map.
collectAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
The asynchronous version of collect, which returns a future for retrieving an array containing all of the elements in this RDD.
collectAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
Returns a future for retrieving all elements of this RDD.
collectEdges(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Returns an RDD that contains for each vertex v its local edges, i.e., the edges that are incident on v, in the user-specified direction.
collectionAccumulator() - Method in class org.apache.spark.SparkContext
Create and register a CollectionAccumulator, which starts with an empty list and accumulates inputs by adding them into the list.
collectionAccumulator(String) - Method in class org.apache.spark.SparkContext
Create and register a CollectionAccumulator, which starts with an empty list and accumulates inputs by adding them into the list.
CollectionAccumulator<T> - Class in org.apache.spark.util
An accumulator for collecting a list of elements.
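For example, a minimal Scala sketch (sc and an RDD[String] named lines are assumed):
    val badRecords = sc.collectionAccumulator[String]("badRecords")
    lines.foreach { line =>
      if (line.isEmpty) badRecords.add(line) // updated on executors
    }
    println(badRecords.value)                // read on the driver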
CollectionAccumulator() - Constructor for class org.apache.spark.util.CollectionAccumulator
 
CollectionsUtils - Class in org.apache.spark.util
 
CollectionsUtils() - Constructor for class org.apache.spark.util.CollectionsUtils
 
collectNeighborIds(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Collect the neighbor vertex ids for each vertex.
collectNeighbors(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
Collect the neighbor vertex attributes for each vertex.
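For example, a minimal Scala sketch (graph is an assumed Graph[VD, ED]; GraphOps methods are available on Graph through an implicit conversion):
    import org.apache.spark.graphx.EdgeDirection

    val neighborIds = graph.collectNeighborIds(EdgeDirection.Either)
    val neighbors   = graph.collectNeighbors(EdgeDirection.Out)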
collectPartitions(int[]) - Method in interface org.apache.spark.api.java.JavaRDDLike
Return an array that contains all of the elements in a specific partition of this RDD.
collectSubModels() - Method in interface org.apache.spark.ml.param.shared.HasCollectSubModels
Param for whether to collect a list of sub-models trained during tuning.
collectSubModels() - Method in class org.apache.spark.ml.tuning.CrossValidator
 
collectSubModels() - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
 
colPtrs() - Method in class org.apache.spark.ml.linalg.SparseMatrix
 
colPtrs() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
 
colRegex(String) - Method in class org.apache.spark.sql.Dataset
Selects a column based on the column name specified as a regex and returns it as a Column.
colsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
 
colStats(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
Computes column-wise summary statistics for the input RDD[Vector].
Column - Class in org.apache.spark.sql.catalog
A column in Spark, as returned by listColumns method in Catalog.
Column(String, String, String, boolean, boolean, boolean) - Constructor for class org.apache.spark.sql.catalog.Column
 
Column - Class in org.apache.spark.sql
A column that will be computed based on the data in a DataFrame.
Column(Expression) - Constructor for class org.apache.spark.sql.Column
 
Column(String) - Constructor for class org.apache.spark.sql.Column
 
column() - Method in class org.apache.spark.sql.connector.catalog.TableChange.After
 
column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Avg
 
column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Count
 
column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Max
 
column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Min
 
column() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Sum
 
column(String) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
Create a named reference expression for a (nested) column.
column(String) - Static method in class org.apache.spark.sql.functions
Returns a Column based on the given column name.
column(int) - Method in class org.apache.spark.sql.vectorized.ColumnarBatch
Returns the column at `ordinal`.
columnAliasInOperationNotAllowedError(String, SqlBaseParser.TableAliasContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
ColumnarArray - Class in org.apache.spark.sql.vectorized
Array abstraction in ColumnVector.
ColumnarArray(ColumnVector, int, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarArray
 
ColumnarBatch - Class in org.apache.spark.sql.vectorized
This class wraps multiple ColumnVectors as a row-wise table.
ColumnarBatch(ColumnVector[]) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatch
 
ColumnarBatch(ColumnVector[], int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatch
Create a new batch from existing column vectors.
ColumnarBatchRow - Class in org.apache.spark.sql.vectorized
This class wraps an array of ColumnVector and provides a row view.
ColumnarBatchRow(ColumnVector[]) - Constructor for class org.apache.spark.sql.vectorized.ColumnarBatchRow
 
ColumnarMap - Class in org.apache.spark.sql.vectorized
Map abstraction in ColumnVector.
ColumnarMap(ColumnVector, ColumnVector, int, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarMap
 
ColumnarRow - Class in org.apache.spark.sql.vectorized
Row abstraction in ColumnVector.
ColumnarRow(ColumnVector, int) - Constructor for class org.apache.spark.sql.vectorized.ColumnarRow
 
columnDoesNotExistError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
ColumnIOUtil - Class in org.apache.parquet.io
This is a workaround since methods below are not public in ColumnIO.
ColumnName - Class in org.apache.spark.sql
A convenient class used for constructing a schema.
ColumnName(String) - Constructor for class org.apache.spark.sql.ColumnName
 
columnNameContainsInvalidCharactersError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
columnNotDefinedInTableError(String, String, String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
columnNotFoundInExistingColumnsError(String, String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
columnNotFoundInSchemaError(StructField, Option<StructType>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
columnProperties() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
 
ColumnPruner - Class in org.apache.spark.ml.feature
Utility transformer for removing temporary columns from a DataFrame.
ColumnPruner(String, Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner
 
ColumnPruner(Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner
 
columns() - Method in class org.apache.spark.sql.connector.catalog.index.TableIndex
 
columns() - Method in class org.apache.spark.sql.Dataset
Returns all column names as an array.
columnSchema() - Static method in class org.apache.spark.ml.image.ImageSchema
Schema for the image column: Row(String, Int, Int, Int, Int, Array[Byte])
columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.
columnSimilarities(double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Compute similarities between columns of this matrix using a sampling approach.
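For example, a minimal Scala sketch (rows is an assumed RDD of mllib Vectors):
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat    = new RowMatrix(rows)
    val exact  = mat.columnSimilarities()    // brute force
    val approx = mat.columnSimilarities(0.1) // DIMSUM sampling with threshold 0.1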
columnStatisticsDeserializationNotSupportedError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
columnStatisticsSerializationNotSupportedError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
columnsToPrune() - Method in class org.apache.spark.ml.feature.ColumnPruner
 
columnToOldVector(Dataset<?>, String) - Static method in class org.apache.spark.ml.util.DatasetUtils
 
columnToVector(Dataset<?>, String) - Static method in class org.apache.spark.ml.util.DatasetUtils
Cast a column in a Dataset to Vector type.
columnTypeNotSupportStatisticsCollectionError(String, TableIdentifier, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
ColumnVector - Class in org.apache.spark.sql.vectorized
An interface representing in-memory columnar data in Spark.
combinationQueryResultClausesUnsupportedError(SqlBaseParser.QueryOrganizationContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.api.java.JavaPairRDD
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
Simplified version of combineByKey that hash-partitions the output RDD and uses map-side aggregation.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.api.java.JavaPairRDD
Simplified version of combineByKey that hash-partitions the resulting RDD using the existing partitioner/parallelism level and using map-side aggregation.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.rdd.PairRDDFunctions
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level.
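For example, a minimal Scala sketch computing a per-key average (pairs is an assumed RDD[(String, Double)]):
    val avgByKey = pairs
      .combineByKey(
        (v: Double) => (v, 1),                                       // createCombiner
        (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1), // mergeValue
        (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2))
      .mapValues { case (sum, count) => sum / count }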
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Combine elements of each key in DStream's RDDs using a custom function.
combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Combine elements of each key in DStream's RDDs using a custom function.
combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, ClassTag<C>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
Combine elements of each key in DStream's RDDs using custom functions.
combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Generic function to combine the elements for each key using a custom set of aggregation functions.
combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the existing partitioner/parallelism level.
combineCombinersByKey(Iterator<? extends Product2<K, C>>, TaskContext) - Method in class org.apache.spark.Aggregator
 
combineValuesByKey(Iterator<? extends Product2<K, V>>, TaskContext) - Method in class org.apache.spark.Aggregator
 
command() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperation
Returns the SQL command that is being performed.
command() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperationInfo
Returns the row-level SQL command (e.g. DELETE, UPDATE, MERGE).
commandExecutionInRunnerUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
CommandLineLoggingUtils - Interface in org.apache.spark.util
 
CommandLineUtils - Interface in org.apache.spark.util
Contains basic command line parsing functionality and methods to parse some common Spark CLI options.
commandNotSupportNestedColumnError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
commandUnsupportedInV2TableError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
comment() - Method in interface org.apache.spark.sql.connector.catalog.MetadataColumn
Documentation for this metadata column, or null.
comment() - Method in class org.apache.spark.sql.connector.catalog.TableChange.AddColumn
 
commentOnTableUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
commit(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
 
commit(Offset) - Method in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
Informs the source that Spark has completed processing all data for offsets less than or equal to `end` and will only request offsets greater than `end` in the future.
commit(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
Commits this writing job with a list of commit messages.
commit() - Method in interface org.apache.spark.sql.connector.write.DataWriter
Commits this writer after all records are written successfully, returns a commit message which will be sent back to the driver side and passed to BatchWrite.commit(WriterCommitMessage[]).
commit(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
Commits this writing job for the specified epoch with a list of commit messages.
commitAllPartitions(long[]) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
Commits the writes done by all partition writers returned by all calls to this object's ShuffleMapOutputWriter.getPartitionWriter(int), and returns the number of bytes written for each partition.
commitDeniedError(int, long, int, int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
commitStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
Finalize the creation or replacement of this table.
commitTask(OutputCommitter, TaskAttemptContext, int, int) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
Commits a task output.
commitTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
 
commonHeaderNodes(HttpServletRequest) - Static method in class org.apache.spark.ui.UIUtils
 
comparator(Schedulable, Schedulable) - Method in interface org.apache.spark.scheduler.SchedulingAlgorithm
 
compare(PartitionGroup, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer.partitionGroupOrdering$
 
compare(byte, byte) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
 
compare(Decimal) - Method in class org.apache.spark.sql.types.Decimal
 
compare(Decimal, Decimal) - Method in interface org.apache.spark.sql.types.Decimal.DecimalIsConflicted
 
compare(Decimal, Decimal) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
 
compare(double, double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
 
compare(double, double) - Method in class org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral$
 
compare(float, float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
 
compare(float, float) - Method in class org.apache.spark.sql.types.FloatType.FloatAsIfIntegral$
 
compare(int, int) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
 
compare(long, long) - Static method in class org.apache.spark.sql.types.LongExactNumeric
 
compare(short, short) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
 
compare(RDDInfo) - Method in class org.apache.spark.storage.RDDInfo
 
compareTo(Object) - Method in class org.apache.spark.sql.util.NumericHistogram.Coord
 
compareTo(SparkShutdownHook) - Method in class org.apache.spark.util.SparkShutdownHook
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
compileAggregate(AggregateFunc) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Converts an aggregate function to a String representing a SQL expression.
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
compileAggregate(AggregateFunc) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
compileExpression(Expression) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Converts a V2 expression to a String representing a SQL expression.
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
compileExpression(Expression) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
compilerError(CompileException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
 
compileValue(Object) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
Converts a value to a SQL expression.
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
compileValue(Object) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
 
Complete() - Static method in class org.apache.spark.sql.streaming.OutputMode
OutputMode in which all the rows in the streaming DataFrame/Dataset will be written to the sink every time there are some updates.
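For example, a minimal Scala sketch (counts is an assumed streaming DataFrame produced by an aggregation):
    import org.apache.spark.sql.streaming.OutputMode

    val query = counts.writeStream
      .outputMode(OutputMode.Complete()) // rewrite the full result each trigger
      .format("console")
      .start()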
completed() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
 
completedIndices() - Method in class org.apache.spark.status.LiveJob
 
completedIndices() - Method in class org.apache.spark.status.LiveStage
 
completedStages() - Method in class org.apache.spark.status.LiveJob
 
completedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
 
completedTasks() - Method in class org.apache.spark.status.LiveJob
 
completedTasks() - Method in class org.apache.spark.status.LiveStage
 
COMPLETION_TIME() - Static method in class org.apache.spark.status.TaskIndexNames
 
completionTime() - Method in class org.apache.spark.scheduler.StageInfo
Time when the stage completed or when the stage was cancelled.
completionTime() - Method in class org.apache.spark.status.api.v1.JobData
 
completionTime() - Method in class org.apache.spark.status.api.v1.StageData
 
completionTime() - Method in class org.apache.spark.status.LiveJob
 
ComplexFutureAction<T> - Class in org.apache.spark
A FutureAction for actions that could trigger multiple Spark jobs.
ComplexFutureAction(Function1<JobSubmitter, Future<T>>) - Constructor for class org.apache.spark.ComplexFutureAction
 
componentName() - Method in class org.apache.spark.resource.ResourceID
 
compositeLimit(ReadLimit[]) - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
 
CompositeReadLimit - Class in org.apache.spark.sql.connector.read.streaming
Represents a ReadLimit where the MicroBatchStream should scan approximately the given maximum number of rows with at least the given minimum number of rows.
compressed() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns a matrix in dense column major, dense row major, sparse row major, or sparse column major format, whichever uses less storage.
compressed() - Method in interface org.apache.spark.ml.linalg.Vector
Returns a vector in either dense or sparse format, whichever uses less storage.
compressed() - Method in interface org.apache.spark.mllib.linalg.Vector
Returns a vector in either dense or sparse format, whichever uses less storage.
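For example, a minimal Scala sketch:
    import org.apache.spark.ml.linalg.Vectors

    val mostlyZeros = Vectors.dense(1.0, 0.0, 0.0, 0.0, 0.0, 0.0)
    val v = mostlyZeros.compressed // sparse here, since few entries are non-zero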
compressedColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns a matrix in dense or sparse column major format, whichever uses less storage.
compressedContinuousInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedContinuousInputStream(InputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
 
compressedContinuousOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
 
compressedInputStream(InputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
 
compressedOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
 
compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.ZStdCompressionCodec
 
compressedRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
Returns a matrix in dense or sparse row major format, whichever uses less storage.
CompressionCodec - Interface in org.apache.spark.io
:: DeveloperApi :: CompressionCodec allows the customization of choosing different compression implementations to be used in block storage.
compute(Partition, TaskContext) - Method in class org.apache.spark.api.r.BaseRRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.EdgeRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.VertexRDD
Provides the RDD[(VertexId, VD)] equivalent output.
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
Compute the gradient and loss given the features of a single data point.
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
Compute the gradient and loss given the features of a single data point, add the gradient to a provided vector to avoid creating new objects, and return loss.
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.L1Updater
 
compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
 
compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LogisticGradient
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SimpleUpdater
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SquaredL2Updater
 
compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.Updater
Compute an updated value for weights given the gradient, stepSize, iteration number and regularization parameter.
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.CoGroupedRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.HadoopRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.JdbcRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.NewHadoopRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.PartitionPruningRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.ShuffledRDD
 
compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.UnionRDD
 
compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaDStream
Generate an RDD for the given duration.
compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
Method that generates an RDD for the given Duration.
compute(Time) - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
 
compute(Time) - Method in class org.apache.spark.streaming.dstream.DStream
Method that generates an RDD for the given time.
compute(Time) - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
 
compute(long, long, long, long) - Method in interface org.apache.spark.streaming.scheduler.rate.RateEstimator
Computes the number of records the stream attached to this RateEstimator should ingest per second, given an update on the size and completion times of the latest batch.
computeClusterStats(Dataset<Row>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
The method takes the input dataset and computes, for each cluster, the aggregated values needed by the algorithm.
computeClusterStats(Dataset<Row>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
The method takes the input dataset and computes, for each cluster, the aggregated values needed by the algorithm.
computeColumnSummaryStatistics() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes column-wise summary statistics.
computeCorrelation(RDD<Object>, RDD<Object>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
Compute correlation for two datasets.
computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
Compute the Pearson correlation for two datasets.
computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
Compute Spearman's correlation for two datasets.
computeCorrelationMatrix(RDD<Vector>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
Compute the correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
Compute the Pearson correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
Compute Spearman's correlation matrix S, for the input matrix, where S(i, j) is the correlation between column i and j.
computeCorrelationMatrixFromCovariance(Matrix) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
Compute the Pearson correlation matrix from the covariance matrix.
computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Method in interface org.apache.spark.mllib.stat.correlation.Correlation
Combine the two input RDD[Double]s into an RDD[Vector] and compute the correlation using the correlation implementation for RDD[Vector].
computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
 
computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
 
computeCost(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
Deprecated.
This method is deprecated and will be removed in future versions. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
computeCost(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Computes the squared distance between the input point and the cluster center it belongs to.
computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Computes the sum of squared distances between the input points and their corresponding cluster centers.
computeCost(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
Java-friendly version of computeCost().
computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
Return the K-means cost (sum of squared distances of points to their nearest center) for this model on the given data.
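For example, a minimal Scala sketch (data is an assumed RDD of mllib Vectors):
    import org.apache.spark.mllib.clustering.KMeans

    val model = KMeans.train(data, k = 3, maxIterations = 20)
    val wssse = model.computeCost(data) // within-set sum of squared errors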
computeCovariance() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the covariance matrix, treating each row as an observation.
computeError(org.apache.spark.mllib.tree.model.TreeEnsembleModel, RDD<LabeledPoint>) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate error of the base learner for the gradient boosting calculation.
computeError(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
Method to calculate loss when the predictions are already known.
computeFractionForSampleSize(int, long, boolean) - Static method in class org.apache.spark.util.random.SamplingUtils
Returns a sampling rate that guarantees a sample of size greater than or equal to sampleSizeLowerBound 99.99% of the time.
computeGradient(DenseMatrix<Object>, DenseMatrix<Object>, Vector, int) - Method in interface org.apache.spark.ml.ann.TopologyModel
Computes gradient for the network
computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Computes the Gramian matrix A^T A.
computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the Gramian matrix A^T A.
computeInitialPredictionAndError(RDD<TreePoint>, double, DecisionTreeRegressionModel, Loss, Broadcast<Split[][]>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
computeInitialPredictionAndError(RDD<LabeledPoint>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
Compute the initial predictions and errors for a dataset for the first iteration of gradient boosting.
computePreferredLocations(Seq<InputFormatInfo>) - Static method in class org.apache.spark.scheduler.InputFormatInfo
Computes the preferred locations based on input(s) and returns a location-to-block map.
computePrevDelta(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>) - Method in interface org.apache.spark.ml.ann.LayerModel
Computes the delta for back propagation.
computePrincipalComponents(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the top k principal components only.
computePrincipalComponentsAndExplainedVariance(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes the top k principal components and a vector of proportions of variance explained by each principal component.
computeProbability(double) - Method in interface org.apache.spark.mllib.tree.loss.ClassificationLoss
Computes the class probability given the margin.
computeSilhouetteCoefficient(Broadcast<Map<Object, Tuple2<Vector, Object>>>, Vector, double, double) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
It computes the Silhouette coefficient for a point.
computeSilhouetteCoefficient(Broadcast<Map<Object, SquaredEuclideanSilhouette.ClusterStats>>, Vector, double, double, double) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
It computes the Silhouette coefficient for a point.
computeSilhouetteScore(Dataset<?>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.CosineSilhouette
Compute the Silhouette score of the dataset using the cosine distance measure.
computeSilhouetteScore(Dataset<?>, String, String, String) - Static method in class org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette
Compute the Silhouette score of the dataset using the squared Euclidean distance measure.
computeStatisticsNotExpectedError(SqlBaseParser.IdentifierContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
 
computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
Computes the singular value decomposition of this IndexedRowMatrix.
computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
Computes singular value decomposition of this matrix.
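For example, a minimal Scala sketch (mat is an assumed RowMatrix):
    val svd = mat.computeSVD(5, computeU = true)
    val u = svd.U // RowMatrix of left singular vectors
    val s = svd.s // Vector of singular values
    val v = svd.V // local Matrix of right singular vectors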
computeThresholdByKey(Map<K, AcceptanceResult>, Map<K, Object>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
Given the result returned by getCounts, determine the threshold for accepting items to generate the exact sample size.
computeWeightedError(RDD<org.apache.spark.ml.feature.Instance>, DecisionTreeRegressionModel[], double[], Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to calculate the error of the base learner for the gradient boosting calculation.
computeWeightedError(RDD<TreePoint>, RDD<Tuple2<Object, Object>>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
Method to calculate the error of the base learner for the gradient boosting calculation.
concat(Column...) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input columns together into a single column.
concat(Seq<Column>) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input columns together into a single column.
concat_ws(String, Column...) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input string columns together into a single string column, using the given separator.
concat_ws(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
Concatenates multiple input string columns together into a single string column, using the given separator.
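A practical difference between the two: concat yields null if any input is null, while concat_ws skips null inputs. A minimal sketch, assuming a SparkSession named spark; the one-row DataFrame is hypothetical:

    import org.apache.spark.sql.functions._

    val df = spark.createDataFrame(Seq(("Ada", "Lovelace"))).toDF("first", "last")
    df.select(
      concat(col("first"), col("last")).as("joined"),
      concat_ws(" ", col("first"), col("last")).as("spaced")
    ).show()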
concatArraysWithElementsExceedLimitError(long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
concurrentModificationOnExternalAppendOnlyUnsafeRowArrayError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
concurrentQueryInstanceError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
conf() - Method in interface org.apache.spark.api.plugin.PluginContext
Configuration of the Spark application.
Conf(int, int, double, double, double, double, double, double) - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
 
conf() - Method in class org.apache.spark.SparkEnv
 
conf() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
 
conf() - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
 
conf() - Method in class org.apache.spark.sql.SparkSession
 
confidence() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
Returns the confidence of the rule.
confidence() - Method in class org.apache.spark.partial.BoundedDouble
 
confidence() - Method in class org.apache.spark.util.sketch.CountMinSketch
Returns the confidence (or delta) of this CountMinSketch.
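The confidence is the probability that an estimate falls within the sketch's error bound. A minimal sketch using the factory that takes an error rate, a confidence, and a seed:

    import org.apache.spark.util.sketch.CountMinSketch

    // relative error eps = 1%, confidence = 99%, seed = 42
    val sketch = CountMinSketch.create(0.01, 0.99, 42)
    sketch.add("spark")
    sketch.confidence()           // at least the requested 0.99
    sketch.estimateCount("spark") // never below the true count (1 here)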
config(String, String) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(String, long) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(String, double) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(String, boolean) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a config option.
config(SparkConf) - Method in class org.apache.spark.sql.SparkSession.Builder
Sets a list of config options based on the given SparkConf.
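The typed overloads above allow options to be set fluently before the session is created. A minimal sketch; the app name and option values are illustrative only:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("config-example")
      .config("spark.sql.shuffle.partitions", 4L)  // long overload
      .config("spark.sql.adaptive.enabled", true)  // boolean overload
      .getOrCreate()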
configRemovedInVersionError(String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
configTestLog4j2(String) - Static method in class org.apache.spark.TestUtils
Configures log4j2 properties for use in a test suite.
Configurable - Interface in org.apache.spark.input
A trait to implement the Configurable interface.
configuration() - Method in class org.apache.spark.scheduler.InputFormatInfo
 
CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.HadoopRDD
Configuration's constructor is not thread-safe (see SPARK-1097 and HADOOP-10456).
CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.NewHadoopRDD
Configuration's constructor is not thread-safe (see SPARK-1097 and HADOOP-10456).
conflictingAttributesInJoinConditionError(AttributeSet, LogicalPlan, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
 
confusionMatrix() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
Returns the confusion matrix: predicted classes are in columns, ordered by ascending class label, as in "labels".
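A minimal sketch of reading the matrix, assuming an RDD of (prediction, label) double pairs named predictionAndLabels:

    import org.apache.spark.mllib.evaluation.MulticlassMetrics

    val metrics = new MulticlassMetrics(predictionAndLabels)
    // Row i = true class, column j = predicted class, both in ascending label order.
    println(metrics.confusionMatrix)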
connectedComponents() - Method in class org.apache.spark.graphx.GraphOps
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
connectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
Compute the connected component membership of each vertex and return a graph with the vertex value containing the lowest vertex id in the connected component containing that vertex.
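A minimal GraphX sketch, assuming a SparkContext named sc; the tiny edge list is hypothetical:

    import org.apache.spark.graphx._

    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(4L, 5L, 1)))
    val graph = Graph.fromEdges(edges, defaultValue = 0)
    // Each vertex ends up labeled with the lowest vertex id in its component.
    graph.connectedComponents().vertices.collect()
    // e.g. Array((1,1), (2,1), (3,1), (4,4), (5,4))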
ConnectedComponents - Class in org.apache.spark.graphx.lib
Connected components algorithm.
ConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.ConnectedComponents
 
consequent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
 
ConstantInputDStream<T> - Class in org.apache.spark.streaming.dstream
An input stream that always returns the same RDD on each time step.
ConstantInputDStream(StreamingContext, RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ConstantInputDStream
 
constructorNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
constructTree(org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData[]) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
Given a list of nodes from a tree, construct the tree.
constructTrees(RDD<org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
 
contains(Param<?>) - Method in class org.apache.spark.ml.param.ParamMap
Checks whether a parameter is explicitly specified.
contains(String) - Method in class org.apache.spark.SparkConf
Does the configuration contain a given parameter?
contains(Object) - Method in class org.apache.spark.sql.Column
Contains the other element; returns a boolean column based on a string match.
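A minimal sketch, assuming a DataFrame df with a string column "name":

    import org.apache.spark.sql.functions.col

    df.filter(col("name").contains("spark"))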
contains(String) - Method in class org.apache.spark.sql.types.Metadata
Tests whether this Metadata contains a binding for a key.
contains(T) - Method in class org.apache.spark.sql.util.SQLOpenHashSet
 
containsKey(Object) - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
 
containsKey(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
containsNaN() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
 
containsNull() - Method in class org.apache.spark.sql.types.ArrayType
 
containsNull() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
 
containsValue(Object) - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
 
contentType() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
 
context() - Method in interface org.apache.spark.api.java.JavaRDDLike
The SparkContext that this RDD was created on.
context() - Method in class org.apache.spark.ContextAwareIterator
 
context() - Method in class org.apache.spark.InterruptibleIterator
 
context() - Method in class org.apache.spark.rdd.RDD
The SparkContext that this RDD was created on.
context() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
Return the StreamingContext associated with this DStream.
context() - Method in class org.apache.spark.streaming.dstream.DStream
Return the StreamingContext associated with this DStream.
ContextAwareIterator<T> - Class in org.apache.spark
:: DeveloperApi :: A TaskContext-aware iterator.
ContextAwareIterator(TaskContext, Iterator<T>) - Constructor for class org.apache.spark.ContextAwareIterator
 
ContextBarrierId - Class in org.apache.spark
For each barrier stage attempt, at most one barrier() call can be active at any time; thus (stageId, stageAttemptId) identifies the stage attempt that a given barrier() call comes from.
ContextBarrierId(int, int) - Constructor for class org.apache.spark.ContextBarrierId
 
Continuous() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
 
Continuous(long) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
Continuous(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
Continuous(Duration) - Static method in class org.apache.spark.sql.streaming.Trigger
(Scala-friendly) A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
Continuous(String) - Static method in class org.apache.spark.sql.streaming.Trigger
A trigger that continuously processes streaming data, asynchronously checkpointing at the specified interval.
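All four overloads express the same checkpoint interval in different forms. A minimal sketch of a continuous-mode query, assuming a SparkSession named spark; the built-in "rate" source and "console" sink both support continuous processing:

    import org.apache.spark.sql.streaming.Trigger

    val query = spark.readStream.format("rate").load()
      .writeStream
      .format("console")
      .trigger(Trigger.Continuous("1 second"))  // checkpoint roughly every second
      .start()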
ContinuousPartitionReader<T> - Interface in org.apache.spark.sql.connector.read.streaming
A variation on PartitionReader for use with continuous streaming processing.
ContinuousPartitionReaderFactory - Interface in org.apache.spark.sql.connector.read.streaming
A variation on PartitionReaderFactory that returns ContinuousPartitionReader instead of PartitionReader.
continuousProcessingUnsupportedByDataSourceError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
ContinuousSplit - Class in org.apache.spark.ml.tree
Split which tests a continuous feature.
ContinuousStream - Interface in org.apache.spark.sql.connector.read.streaming
A SparkDataStream for streaming queries with continuous mode.
conv(Column, int, int) - Static method in class org.apache.spark.sql.functions
Convert a number in a string column from one base to another.
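A minimal sketch, assuming a SparkSession named spark; lit("ff") is a hypothetical input value:

    import org.apache.spark.sql.functions._

    // "ff" interpreted in base 16, rendered in base 10 -> "255"
    spark.range(1).select(conv(lit("ff"), 16, 10)).show()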
convertCachedBatchToColumnarBatch(RDD<CachedBatch>, Seq<Attribute>, Seq<Attribute>, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
Convert the cached data into a ColumnarBatch.
convertCachedBatchToInternalRow(RDD<CachedBatch>, Seq<Attribute>, Seq<Attribute>, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
Convert the cached batch into InternalRows.
convertColumnarBatchToCachedBatch(RDD<ColumnarBatch>, Seq<Attribute>, StorageLevel, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
Convert an RDD[ColumnarBatch] into an RDD[CachedBatch] in preparation for caching the data.
convertHiveTableToCatalogTableError(SparkException, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
 
convertInternalRowToCachedBatch(RDD<InternalRow>, Seq<Attribute>, StorageLevel, SQLConf) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
Convert an RDD[InternalRow] into an RDD[CachedBatch] in preparation for caching the data.
convertMatrixColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts matrix columns in an input Dataset from the new Matrix type under the spark.ml package to the mllib Matrix type.
convertMatrixColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts matrix columns in an input Dataset from the new Matrix type under the spark.ml package to the mllib Matrix type.
convertMatrixColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts matrix columns in an input Dataset from the mllib Matrix type to the new Matrix type under the spark.ml package.
convertMatrixColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts matrix columns in an input Dataset from the mllib Matrix type to the new Matrix type under the spark.ml package.
convertTableProperties(TableSpec) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
 
convertToCanonicalEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.GraphOps
Convert bi-directional edges into uni-directional ones.
convertToOldLossType(String) - Method in interface org.apache.spark.ml.tree.GBTRegressorParams
 
convertToTimeUnit(long, TimeUnit) - Static method in class org.apache.spark.streaming.ui.UIUtils
Convert milliseconds to the specified unit.
convertTransforms() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TransformHelper
 
convertVectorColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset from the new Vector type under the spark.ml package to the mllib Vector type.
convertVectorColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset from the new Vector type under the spark.ml package to the mllib Vector type.
convertVectorColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset from the mllib Vector type to the new Vector type under the spark.ml package.
convertVectorColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
Converts vector columns in an input Dataset from the mllib Vector type to the new Vector type under the spark.ml package.
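A minimal sketch of the to-ML direction, assuming a SparkSession named spark; the single-row DataFrame is hypothetical:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.util.MLUtils

    val df = spark.createDataFrame(Seq(
      (0, Vectors.dense(1.0, 2.0)))).toDF("id", "features")
    // Convert only the named column; other columns pass through unchanged.
    val converted = MLUtils.convertVectorColumnsToML(df, "features")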
Coord() - Constructor for class org.apache.spark.sql.util.NumericHistogram.Coord
 
CoordinateMatrix - Class in org.apache.spark.mllib.linalg.distributed
Represents a matrix in coordinate format.
CoordinateMatrix(RDD<MatrixEntry>, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
 
CoordinateMatrix(RDD<MatrixEntry>) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
Alternative constructor leaving matrix dimensions to be determined automatically.
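A minimal sketch, assuming a SparkContext named sc; the entries are hypothetical:

    import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}

    val entries = sc.parallelize(Seq(
      MatrixEntry(0, 0, 1.0), MatrixEntry(2, 1, 3.0)))
    // Dimensions are inferred as (max index + 1) when not given explicitly.
    val coo = new CoordinateMatrix(entries)
    (coo.numRows(), coo.numCols())  // (3, 2)
    val rowMat = coo.toRowMatrix()  // convert for row-oriented computations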
copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.FMClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.FMClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVC
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVCModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegression
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayes
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayesModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRest
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRestModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixture
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeans
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeansModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LDA
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LocalLDAModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
 
copy(ParamMap) - Method in class org.apache.spark.ml.Estimator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.ClusteringEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MultilabelClassificationEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RankingEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Binarizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Bucketizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelector
Deprecated.
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ColumnPruner
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.FeatureHasher
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.HashingTF
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDF
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDFModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Imputer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.ImputerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.IndexToString
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Interaction
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSH
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoder
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCA
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCAModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RegexTokenizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormula
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormulaModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RobustScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.RobustScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.SQLTransformer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScaler
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScalerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StopWordsRemover
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexer
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexerModel
 
copy(ParamMap) - Method in class org.apache.spark.ml.feature.Tokenizer
 
copy(ParamMap) - Method in class org.apache.spark.ml.