- abort(Throwable) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
-
- abs(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the absolute value of a numeric value.
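A minimal Java sketch of abs over a DataFrame column (the Dataset df and the column name "delta" are assumptions):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.abs;
    import static org.apache.spark.sql.functions.col;

    // Select the absolute value of the (assumed) numeric column "delta".
    Dataset<Row> result = df.select(abs(col("delta")).alias("abs_delta"));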
- abs(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
-
- abs() - Method in class org.apache.spark.sql.types.Decimal
-
- abs(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
-
- abs(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
-
- abs(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
-
- abs(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
-
- abs(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
-
- abs(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
-
- abs(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
-
- abs(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
-
- absent() - Static method in class org.apache.spark.api.java.Optional
-
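A short Java sketch of Optional.absent() next to Optional.of() (values are illustrative):

    import org.apache.spark.api.java.Optional;

    Optional<String> none = Optional.absent();    // empty Optional
    Optional<String> some = Optional.of("value");
    boolean present = none.isPresent();           // false
    String value = some.get();                    // "value"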
- AbsoluteError - Class in org.apache.spark.mllib.tree.loss
-
Class for absolute error loss calculation (for regression).
- AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
-
- AbstractLauncher<T extends AbstractLauncher<T>> - Class in org.apache.spark.launcher
-
Base class for launcher implementations.
- accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(Path) - Method in class org.apache.spark.ml.image.SamplePathFilter
-
- acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType
-
- accessNonExistentAccumulatorError(long) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- accId() - Method in class org.apache.spark.CleanAccum
-
- accumCleaned(long) - Method in interface org.apache.spark.CleanerListener
-
- AccumulableInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
Information about an AccumulatorV2 modified during a task or stage.
- AccumulableInfo - Class in org.apache.spark.status.api.v1
-
- accumulableInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- AccumulableInfoSerializer - Class in org.apache.spark.status.protobuf
-
- AccumulableInfoSerializer() - Constructor for class org.apache.spark.status.protobuf.AccumulableInfoSerializer
-
- accumulableInfoToJson(AccumulableInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- accumulables() - Method in class org.apache.spark.scheduler.StageInfo
-
Terminal values of accumulables updated during this stage, including all the user-defined
accumulators.
- accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
-
Intermediate updates to accumulables during this task.
- accumulablesToJson(Iterable<AccumulableInfo>, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- ACCUMULATOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
- AccumulatorContext - Class in org.apache.spark.util
-
An internal class used to track accumulators by Spark itself.
- AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
-
- ACCUMULATORS() - Static method in class org.apache.spark.status.TaskIndexNames
-
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
-
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
-
- AccumulatorV2<IN,OUT> - Class in org.apache.spark.util
-
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
- AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2
-
- accumUpdates() - Method in class org.apache.spark.ExceptionFailure
-
- accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- accumUpdates() - Method in class org.apache.spark.TaskKilled
-
- accuracy() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns accuracy.
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns accuracy.
- acos(Column) - Static method in class org.apache.spark.sql.functions
-
- acos(String) - Static method in class org.apache.spark.sql.functions
-
- acosh(Column) - Static method in class org.apache.spark.sql.functions
-
- acosh(String) - Static method in class org.apache.spark.sql.functions
-
- acquire(Seq<String>) - Method in interface org.apache.spark.resource.ResourceAllocator
-
Acquire a sequence of resource addresses (for a launched task); these addresses must be
available.
- actionNotAllowedOnTableSincePartitionMetadataNotStoredError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- actionNotAllowedOnTableWithFilesourcePartitionManagementDisabledError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ActivationFunction - Interface in org.apache.spark.ml.ann
-
Trait for functions and their derivatives for functional layers.
- active() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the currently active SparkSession, otherwise the default one.
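A minimal Java sketch of retrieving the active session (the app name and master URL are illustrative):

    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder()
        .appName("demo")
        .master("local[*]")
        .getOrCreate();

    // getOrCreate() also marks the session as active on this thread.
    SparkSession current = SparkSession.active();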
- active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns a list of active queries associated with this SQLContext.
- active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- ACTIVE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- activeIterator() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns an iterator over all the active elements of this vector.
- activeIterator() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns an iterator over all the active elements of this vector.
- activeStages() - Method in class org.apache.spark.status.LiveJob
-
- activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- activeTasks() - Method in class org.apache.spark.status.LiveJob
-
- activeTasks() - Method in class org.apache.spark.status.LiveStage
-
- activeTasksPerExecutor() - Method in class org.apache.spark.status.LiveStage
-
- add(Tuple2<Vector, Object>) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
Add a new training instance to this ExpectationAggregator, update the weights,
means, and covariances for each distribution, and update the log likelihood.
- add(org.apache.spark.ml.feature.InstanceBlock) - Method in class org.apache.spark.ml.clustering.KMeansAggregator
-
- add(Term) - Static method in class org.apache.spark.ml.feature.Dot
-
- add(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
-
- add(Term) - Method in interface org.apache.spark.ml.feature.Term
-
Creates a summation term by concatenation of terms.
- add(Datum) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
Add a single data point to this aggregator.
- add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
-
- add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Adds a new document.
- add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Adds the given block matrix other to this block matrix: this + other.
- add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Add a new sample to this summarizer, and update the statistical summary.
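A small Java sketch of feeding samples to a MultivariateOnlineSummarizer (the vector values are illustrative):

    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.linalg.Vectors;
    import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer;

    MultivariateOnlineSummarizer summarizer = new MultivariateOnlineSummarizer();
    summarizer.add(Vectors.dense(1.0, 2.0));
    summarizer.add(Vectors.dense(3.0, 4.0));

    // Column-wise statistics over everything added so far.
    Vector mean = summarizer.mean();
    Vector variance = summarizer.variance();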
- add(StructField) - Method in class org.apache.spark.sql.types.StructType
-
- add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata.
- add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata.
- add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata where the
dataType is specified as a String.
- add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata where the
dataType is specified as a String.
- add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the
dataType is specified as a String.
- add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the
dataType is specified as a String.
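A minimal Java sketch chaining several of the add overloads above (field names are illustrative):

    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    StructType schema = new StructType()
        .add("id", DataTypes.LongType)              // nullable field, no metadata
        .add("name", "string", true)                // dataType given as a String
        .add("score", DataTypes.DoubleType, false); // non-nullable field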
- add(Long) - Method in class org.apache.spark.sql.util.MapperRowCounter
-
- add(double) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Adds a new data point to the histogram approximation.
- add(T) - Method in class org.apache.spark.sql.util.SQLOpenHashSet
-
- add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
- add(IN) - Method in class org.apache.spark.util.AccumulatorV2
-
Takes the inputs and accumulates.
- add(T) - Method in class org.apache.spark.util.CollectionAccumulator
-
- add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(Long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
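A short Java sketch of a LongAccumulator in action (the JavaSparkContext jsc and the JavaRDD<String> lines are assumptions):

    import org.apache.spark.util.LongAccumulator;

    LongAccumulator counter = jsc.sc().longAccumulator("lineCounter");
    lines.foreach(line -> counter.add(1L));  // add() runs inside tasks
    long total = counter.value();            // read back on the driver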
- add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
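A minimal Java sketch of a CountMinSketch (the eps, confidence, and seed values are illustrative):

    import org.apache.spark.util.sketch.CountMinSketch;

    CountMinSketch sketch = CountMinSketch.create(0.01, 0.99, 42);
    sketch.add("spark");                            // increment by one
    sketch.add("spark", 4L);                        // increment by four
    long estimate = sketch.estimateCount("spark");  // approximately 5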
- add_months(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
- add_months(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
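A short Java sketch of both add_months overloads (the Dataset df and the column "start_date" are assumptions):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.add_months;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.lit;

    Dataset<Row> shifted = df.select(
        add_months(col("start_date"), 3).alias("plus_3m"),        // int overload
        add_months(col("start_date"), lit(6)).alias("plus_6m"));  // Column overload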
- ADD_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- ADD_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAddresses(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAddressesBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAllAddresses(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAllAttempts(Iterable<? extends StoreTypes.ApplicationAttemptInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAllBlacklistedInStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- addAllBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- addAllBytesWritten(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- addAllChildClusters(Iterable<? extends StoreTypes.RDDOperationClusterWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addAllChildNodes(Iterable<? extends StoreTypes.RDDOperationNode>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addAllClasspathEntries(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addAllCorruptMergedBlockChunks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- addAllDataDistribution(Iterable<? extends StoreTypes.RDDDataDistribution>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addAllDiskBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- addAllDiskBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- addAllDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- addAllEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addAllEdges(Iterable<? extends StoreTypes.SparkPlanGraphEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addAllExcludedInStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- addAllExecutorCpuTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- addAllExecutorDeserializeCpuTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- addAllExecutorDeserializeTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- addAllExecutorMetrics(Iterable<? extends StoreTypes.ExecutorMetrics>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addAllExecutorRunTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- addAllExecutors(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addAllFailedTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- addAllFetchWaitTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- addAllGettingResultTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- addAllHadoopProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addAllIncomingEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addAllInputBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- addAllInputRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- addAllJobIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- addAllJvmGcTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- addAllKilledTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- addAllLocalBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- addAllLocalMergedBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- addAllLocalMergedBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- addAllLocalMergedChunksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- addAllMemoryBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- addAllMemoryBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- addAllMergedFetchFallbackCount(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addAllMetricsProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addAllNodes(Iterable<? extends StoreTypes.SparkPlanGraphNodeWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addAllNodes(Iterable<? extends StoreTypes.SparkPlanGraphNodeWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addAllOutgoingEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addAllOutputBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- addAllOutputRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- addAllPartitions(Iterable<? extends StoreTypes.RDDPartitionInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addAllPeakExecutionMemory(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- addAllRddIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- addAllReadBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- addAllReadRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- addAllRecordsRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- addAllRecordsWritten(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- addAllRemoteBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- addAllRemoteBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- addAllRemoteBytesReadToDisk(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- addAllRemoteMergedBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- addAllRemoteMergedBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- addAllRemoteMergedChunksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- addAllRemoteMergedReqsDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- addAllRemoteReqsDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- addAllResourceProfiles(Iterable<? extends StoreTypes.ResourceProfileInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addAllResultSerializationTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- addAllResultSize(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- addAllSchedulerDelay(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- addAllShuffleRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- addAllShuffleReadRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- addAllShuffleWrite(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- addAllShuffleWriteRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- addAllSkippedStages(Iterable<? extends Integer>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- addAllSources(Iterable<? extends StoreTypes.SourceProgress>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addAllSparkProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addAllStageIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- addAllStageIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- addAllStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- addAllStateOperators(Iterable<? extends StoreTypes.StateOperatorProgress>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addAllSucceededTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- addAllSystemProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addAllTaskTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- addAllTotalBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- addAllWriteBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- addAllWriteRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- addAllWriteTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- addAppArgs(String...) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds command line arguments for the application.
- addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
-
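A minimal Java sketch of launching an application with addAppArgs (the jar path and main class are hypothetical):

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    // startApplication() throws java.io.IOException.
    SparkAppHandle handle = new SparkLauncher()
        .setAppResource("/path/to/app.jar")    // hypothetical jar
        .setMainClass("com.example.MyApp")     // hypothetical main class
        .addAppArgs("--input", "data.txt")
        .startApplication();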
- addArchive(String) - Method in class org.apache.spark.SparkContext
-
:: Experimental ::
Add an archive to be downloaded and unpacked with this Spark job on every node.
- addAttempts(StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(int, StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(int, StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttemptsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttemptsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addBin(double, double, int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Set a particular histogram bin at the given index.
- addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addBlacklistedInStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- addBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- addBytesWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- addCatalogInCacheTableAsSelectNotAllowedError(String, SqlBaseParser.CacheTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- addChildClusters(StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(int, StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(int, StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClustersBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClustersBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildNodes(StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(int, StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(int, StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChunk(ShuffleBlockChunkId, RoaringBitmap) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when the iterator.next() is invoked and the iterator processes a response of type ShuffleBlockFetcherIterator.PushMergedLocalMetaFetchResult.
- addClasspathEntries(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntriesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntriesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- addCorruptMergedBlockChunks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- addDataDistribution(StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(int, StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(int, StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistributionBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistributionBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- addDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- addDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- addEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(int, StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(int, StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addExcludedInStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- addExecutorCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- addExecutorDeserializeCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- addExecutorDeserializeTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- addExecutorMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(int, StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(int, StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorRunTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- addExecutors(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addExecutorsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addFailedTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- addFetchWaitTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a file to be submitted with the application.
- addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addFile(String) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
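A short Java sketch pairing addFile with SparkFiles.get (the JavaSparkContext jsc and the file URL are assumptions):

    import org.apache.spark.SparkFiles;

    jsc.addFile("hdfs://namenode/config/settings.json");

    // Inside a task on any node, resolve the downloaded local copy:
    String localPath = SparkFiles.get("settings.json");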
- addFilesWithAbsolutePathUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- addFilter(ServletContextHandler, String, Map<String, String>) - Static method in class org.apache.spark.ui.JettyUtils
-
- addGettingResultTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a param with multiple values (overwrites if the input param exists).
- addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a double param with multiple values.
- addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds an int param with multiple values.
- addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a float param with multiple values.
- addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a long param with multiple values.
- addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a boolean param with true and false.
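A minimal Java sketch building a grid with the addGrid overloads above (LogisticRegression and its params are chosen for illustration):

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    LogisticRegression lr = new LogisticRegression();
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[]{0.1, 0.01})
        .addGrid(lr.fitIntercept())  // boolean param: expands to true and false
        .build();                    // 2 x 2 = 4 combinations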
- addHadoopProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addIncomingEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addInputBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- addInputRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJar(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a jar file to be submitted with the application.
- addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addJar(String) - Method in class org.apache.spark.SparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJarsToClassPath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
-
- addJarToClasspath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
-
- addJobIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- addJvmGcTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- addKilledTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Adds a listener to be notified of changes to the handle's information.
- addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
- addListener(L) - Method in interface org.apache.spark.util.ListenerBus
-
Add a listener to listen events.
- addLocalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
-
Add Hadoop configuration specific to a single partition and attempt.
- addLocalMergedBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- addLocalMergedBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- addLocalMergedChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
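For illustration, a minimal sketch of these CountMinSketch increment methods (the depth, width, and seed values below are arbitrary):

    import org.apache.spark.util.sketch.CountMinSketch;

    public class CmsExample {
      public static void main(String[] args) {
        // depth = 10, width = 2000, seed = 42 (arbitrary values)
        CountMinSketch sketch = CountMinSketch.create(10, 2000, 42);
        sketch.addLong(7L);        // increments 7's count by one
        sketch.addLong(7L, 5L);    // increments 7's count by 5
        sketch.addString("spark"); // increments "spark"'s count by one
        // Approximately 6; Count-Min Sketch may overestimate, never underestimate.
        System.out.println(sketch.estimateCount(7L));
      }
    }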
- addMapOutput(int, MapStatus) - Method in class org.apache.spark.ShuffleStatus
-
Register a map output.
- addMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- addMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- addMergedFetchFallbackCount(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- addMergeResult(int, org.apache.spark.scheduler.MergeStatus) - Method in class org.apache.spark.ShuffleStatus
-
Register a merge result.
- addMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
-
Add m2 values to m1.
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addNaN() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
-
- addNewDefaultColumnToExistingTableNotAllowed(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- addNewFunctionMismatchedWithFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNull() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
-
- addOutgoingEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutputBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- addOutputRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- addPartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq
-
- addPartitions(StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(int, StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(int, StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitionsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitionsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- addPeakExecutionMemory(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- addPyFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a Python file / zip / egg to be submitted with the application.
- addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- addRddIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- addReadBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- addReadRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- addRecordsRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- addRecordsWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- addRemoteBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- addRemoteBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- addRemoteBytesReadToDisk(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- addRemoteMergedBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- addRemoteMergedBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- addRemoteMergedChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- addRemoteMergedReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- addRemoteReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- addRequest(TaskResourceRequest) - Method in class org.apache.spark.resource.TaskResourceRequests
-
- addResourceProfiles(StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(int, StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(int, StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfilesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfilesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- address() - Method in class org.apache.spark.BarrierTaskInfo
-
- address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
-
- ADDRESS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
- addresses() - Method in class org.apache.spark.resource.ResourceInformation
-
- addresses() - Method in class org.apache.spark.resource.ResourceInformationJson
-
- ADDRESSES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
- addResultSerializationTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- addResultSize(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- addSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable
-
- addSchedulerDelay(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- addShuffleRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- addShuffleReadRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- addShuffleWrite(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- addShuffleWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with default priority.
- addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with the given priority.
- addSkippedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- addSources(StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(int, StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(int, StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSourcesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSourcesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSparkArg(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a no-value argument to the Spark invocation.
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds an argument with a value to the Spark invocation.
- addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Register a listener to receive up-calls from events that happen during execution.
- addSparkProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addStageIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- addStageIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- addStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- addStateOperators(StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(int, StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(int, StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperatorsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperatorsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
- addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addSucceededTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- addSystemProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.BarrierTaskContext
-
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
-
Adds a (Java friendly) listener to be executed on task completion.
- addTaskCompletionListener(Function1<TaskContext, U>) - Method in class org.apache.spark.TaskContext
-
Adds a listener in the form of a Scala closure to be executed on task completion.
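For illustration, a sketch of registering the Java-friendly completion listener from inside a task body (the cast disambiguates between the listener overload and the Scala-closure overload):

    import java.util.Arrays;
    import org.apache.spark.TaskContext;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.util.TaskCompletionListener;

    public class CompletionListenerExample {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "listener-example");
        JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3));
        rdd.map(x -> {
          // Register a per-task cleanup hook; it runs whether the
          // task succeeds or fails.
          TaskContext.get().addTaskCompletionListener((TaskCompletionListener) ctx ->
              System.out.println("task " + ctx.taskAttemptId() + " completed"));
          return x * 2;
        }).collect();
        sc.stop();
      }
    }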
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.BarrierTaskContext
-
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure (which includes completion listener failure, if
the task body did not already fail).
- addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure (which includes completion listener failure, if
the task body did not already fail).
- addTaskResourceRequests(SparkConf, TaskResourceRequests) - Static method in class org.apache.spark.resource.ResourceUtils
-
- addTaskSetManager(Schedulable, Properties) - Method in interface org.apache.spark.scheduler.SchedulableBuilder
-
- addTaskTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- addTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- addTime() - Method in class org.apache.spark.status.api.v1.ProcessSummary
-
- addTotalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- addURL(URL) - Method in class org.apache.spark.util.MutableURLClassLoader
-
- AddWebUIFilter(String, Map<String, String>, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
-
- AddWebUIFilter$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
-
- addWriteBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- addWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- addWriteTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- aesCryptoError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- aesModeUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- AFTSurvivalRegression - Class in org.apache.spark.ml.regression
-
- AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
-
- AFTSurvivalRegressionParams - Interface in org.apache.spark.ml.regression
-
Params for accelerated failure time (AFT) regression.
- agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
-
(Java-specific) Aggregates on the entire Dataset without groups.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Aggregates on the entire Dataset without groups.
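For illustration, a minimal sketch of whole-Dataset aggregation in Java (the input file and column names are hypothetical):

    import static org.apache.spark.sql.functions.avg;
    import static org.apache.spark.sql.functions.max;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class AggExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[2]").appName("agg-example").getOrCreate();
        Dataset<Row> df = spark.read().json("people.json"); // hypothetical file
        // Aggregate over the entire Dataset, without grouping.
        df.agg(max(df.col("age")), avg(df.col("salary"))).show();
        spark.stop();
      }
    }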
- agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>, TypedColumn<V, U8>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying the column names and
aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying a map from column name to
aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Java-specific) Compute aggregates by specifying a map from column name to
aggregate methods.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregate the elements of each partition, and then the results for all the partitions, using
given combine functions and a neutral "zero value".
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Aggregate the elements of each partition, and then the results for all the partitions, using
given combine functions and a neutral "zero value".
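For illustration, a sketch using the Java API to compute a sum and a count in one pass with a tuple zero value:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class AggregateExample {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "aggregate-example");
        JavaRDD<Integer> nums = sc.parallelize(Arrays.asList(1, 2, 3, 4));
        // Zero value (0, 0); seqOp folds one element into (sum, count);
        // combOp merges the per-partition results.
        Tuple2<Integer, Integer> sumCount = nums.aggregate(
            new Tuple2<>(0, 0),
            (acc, x) -> new Tuple2<>(acc._1() + x, acc._2() + 1),
            (a, b) -> new Tuple2<>(a._1() + b._1(), a._2() + b._2()));
        System.out.println((double) sumCount._1() / sumCount._2()); // 2.5
        sc.stop();
      }
    }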
- aggregate(Column, Column, Function2<Column, Column, Column>, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array,
and reduces this to a single state.
- aggregate(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array,
and reduces this to a single state.
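For illustration, the same higher-order function invoked through its SQL form, which avoids passing Scala function objects from Java (a minimal sketch):

    import org.apache.spark.sql.SparkSession;

    public class HigherOrderAggregateExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[2]").appName("hof-aggregate").getOrCreate();
        // SQL form of the aggregate higher-order function: fold the array
        // into a single state starting from the initial value 0.
        spark.sql("SELECT aggregate(array(1, 2, 3), 0, (acc, x) -> acc + x) AS total")
             .show(); // total = 6
        spark.stop();
      }
    }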
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
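For illustration, a sketch of a per-key maximum with aggregateByKey in the Java API (zero value 0; both combine functions take the max):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class AggregateByKeyExample {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "abk-example");
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("a", 3), new Tuple2<>("b", 2)));
        // seqOp folds a value into the per-partition max; combOp merges
        // the per-partition maxima across partitions.
        JavaPairRDD<String, Integer> maxPerKey =
            pairs.aggregateByKey(0, Math::max, Math::max);
        System.out.println(maxPerKey.collectAsMap()); // {a=3, b=2}
        sc.stop();
      }
    }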
- AggregatedDialect - Class in org.apache.spark.sql.jdbc
-
AggregatedDialect can unify multiple dialects into one virtual Dialect.
- AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect
-
- aggregateExpressionRequiredForPivotError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregateInAggregateFilterError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
-
Aggregates values from the neighboring edges and vertices of each vertex.
- aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
- AggregatingEdgeContext<VD,ED,A> - Class in org.apache.spark.graphx.impl
-
- AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVC
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- aggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth
-
Param for suggested depth for treeAggregate (>= 2).
- aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- aggregationFunctionAppliedOnNonNumericColumnError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregationFunctionAppliedOnNonNumericColumnError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- Aggregator<K,V,C> - Class in org.apache.spark
-
:: DeveloperApi ::
A set of functions used to aggregate data.
- Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator
-
- aggregator() - Method in class org.apache.spark.ShuffleDependency
-
- Aggregator<IN,BUF,OUT> - Class in org.apache.spark.sql.expressions
-
A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
- Aggregator() - Constructor for class org.apache.spark.sql.expressions.Aggregator
-
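For illustration, a minimal type-safe Aggregator in Java that sums Long inputs (a sketch; the class name LongSum is hypothetical):

    import org.apache.spark.sql.Encoder;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.expressions.Aggregator;

    // Aggregator<IN, BUF, OUT>: here all three are Long.
    public class LongSum extends Aggregator<Long, Long, Long> {
      @Override public Long zero() { return 0L; }                       // identity value
      @Override public Long reduce(Long buffer, Long value) { return buffer + value; }
      @Override public Long merge(Long b1, Long b2) { return b1 + b2; } // combine partial sums
      @Override public Long finish(Long reduction) { return reduction; }
      @Override public Encoder<Long> bufferEncoder() { return Encoders.LONG(); }
      @Override public Encoder<Long> outputEncoder() { return Encoders.LONG(); }
    }

Assuming a Dataset<Long> named ds, it could be applied with ds.select(new LongSum().toColumn()); in Spark 3.0+ it can also be wrapped as an untyped function via functions.udaf(new LongSum(), Encoders.LONG()).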
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
- aic() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
- Algo - Class in org.apache.spark.mllib.tree.configuration
-
Enum to select the algorithm for the decision tree.
- Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo
-
- algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
- alias(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- alias(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with an alias set.
- alias(Symbol) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
- aliasesNumberNotMatchUDTFOutputError(int, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aliasNumberNotMatchColumnNumberError(int, int, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- All - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose all the fields (source, edge, and destination).
- ALL_GATHER() - Static method in class org.apache.spark.RequestMethod
-
- ALL_REMOVALS_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
- ALL_UPDATES_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
- allGather(String) - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental ::
Blocks until all tasks in the same stage have reached this routine.
- AllJobsCancelled - Class in org.apache.spark.scheduler
-
- AllJobsCancelled() - Constructor for class org.apache.spark.scheduler.AllJobsCancelled
-
- allocate(int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Sets the number of histogram bins to use for approximating data.
- allocator() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
-
- AllReceiverIds - Class in org.apache.spark.streaming.scheduler
-
A message used by ReceiverTracker to ask for the ids of all receivers still stored in ReceiverTrackerEndpoint.
- AllReceiverIds() - Constructor for class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- allRemovalsTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
- allSources() - Static method in class org.apache.spark.metrics.source.StaticSources
-
The set of all static sources.
- allSupportedExecutorResources() - Static method in class org.apache.spark.resource.ResourceProfile
-
Return all supported Spark built-in executor resources; custom resources like GPUs/FPGAs are excluded.
- allUpdatesTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
- alpha() - Method in class org.apache.spark.ml.recommendation.ALS
-
- alpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for the alpha parameter in the implicit preference formulation (nonnegative).
- alpha() - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- ALS - Class in org.apache.spark.ml.recommendation
-
Alternating Least Squares (ALS) matrix factorization.
- ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS
-
- ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS
-
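For illustration, a minimal sketch of fitting the ml ALS estimator in Java (the input path and column names are hypothetical):

    import org.apache.spark.ml.recommendation.ALS;
    import org.apache.spark.ml.recommendation.ALSModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class AlsExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[2]").appName("als-example").getOrCreate();
        // Hypothetical ratings data with userId/movieId/rating columns.
        Dataset<Row> ratings = spark.read().parquet("ratings.parquet");
        ALS als = new ALS()
            .setMaxIter(5)
            .setRegParam(0.01)
            .setUserCol("userId")
            .setItemCol("movieId")
            .setRatingCol("rating");
        ALSModel model = als.fit(ratings);
        model.recommendForAllUsers(10).show(); // top-10 items per user
        spark.stop();
      }
    }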
- ALS - Class in org.apache.spark.mllib.recommendation
-
Alternating Least Squares matrix factorization.
- ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
-
Constructs an ALS instance with default parameters: {numBlocks: -1, rank: 10, iterations: 10,
lambda: 0.01, implicitPrefs: false, alpha: 1.0}.
- ALS.InBlock$ - Class in org.apache.spark.ml.recommendation
-
- ALS.LeastSquaresNESolver - Interface in org.apache.spark.ml.recommendation
-
Trait for least squares solvers applied to the normal equation.
- ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
-
Rating class for better code readability.
- ALS.Rating$ - Class in org.apache.spark.ml.recommendation
-
- ALS.RatingBlock$ - Class in org.apache.spark.ml.recommendation
-
- ALSModel - Class in org.apache.spark.ml.recommendation
-
Model fitted by ALS.
- ALSModelParams - Interface in org.apache.spark.ml.recommendation
-
Common params for ALS and ALSModel.
- ALSParams - Interface in org.apache.spark.ml.recommendation
-
Common params for ALS.
- alterAddColNotSupportDatasourceTableError(Object, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterAddColNotSupportViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterColumnCannotFindColumnInV1TableError(String, V1Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterDatabaseLocationUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
-
- alterTable(String, Seq<TableChange>, int) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Alter an existing table.
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- alterTableChangeColumnNotSupportedForColumnTypeError(StructField, StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableRecoverPartitionsNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableSerDePropertiesNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableSetSerdeForSpecificPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableSetSerdeNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableWithDropPartitionAndPurgeUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- alterV2TableSetLocationWithPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- AlwaysFalse - Class in org.apache.spark.sql.sources
-
A filter that always evaluates to false.
- AlwaysFalse() - Constructor for class org.apache.spark.sql.sources.AlwaysFalse
-
- AlwaysTrue - Class in org.apache.spark.sql.sources
-
A filter that always evaluates to true.
- AlwaysTrue() - Constructor for class org.apache.spark.sql.sources.AlwaysTrue
-
- am() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
-
- ambiguousAttributesInSelfJoinError(Seq<AttributeReference>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousColumnOrFieldError(Seq<String>, int, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousColumnOrFieldError(Seq<String>, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousLateralColumnAliasError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousLateralColumnAliasError(Seq<String>, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousReferenceError(String, Seq<Attribute>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousReferenceToFieldsError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousRelationAliasNameInNestedCTEError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- amount() - Method in class org.apache.spark.resource.ExecutorResourceRequest
-
- amount() - Method in class org.apache.spark.resource.ResourceRequest
-
- AMOUNT() - Static method in class org.apache.spark.resource.ResourceUtils
-
- amount() - Method in class org.apache.spark.resource.TaskResourceRequest
-
- AMOUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
- AMOUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
-
- AnalysisException - Exception in org.apache.spark.sql
-
Thrown when a query fails to analyze, usually because the query itself is invalid.
- AnalysisException(String, Map<String, String>, Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>, QueryContext[], String) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>, Origin) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>, Origin, Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- analyzeTableNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- analyzeTableNotSupportedOnViewsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- analyzingColumnStatisticsNotSupportedForColumnTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- and(Column) - Method in class org.apache.spark.sql.Column
-
Boolean AND.
- And - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff both left and right evaluate to true.
- And(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.And
-
- ANOVATest - Class in org.apache.spark.ml.stat
-
ANOVA Test for continuous data.
- ANOVATest() - Constructor for class org.apache.spark.ml.stat.ANOVATest
-
- ansiDateTimeError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ansiDateTimeParseError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ansiIllegalArgumentError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ansiIllegalArgumentError(IllegalArgumentException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- antecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
- ANY() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- AnyDataType - Class in org.apache.spark.sql.types
-
An AbstractDataType that matches any concrete data type.
- AnyDataType() - Constructor for class org.apache.spark.sql.types.AnyDataType
-
- anyNull() - Method in interface org.apache.spark.sql.Row
-
Returns true if there are any NULL values in this row.
- anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
-
- anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarRow
-
- AnyTimestampType - Class in org.apache.spark.sql.types
-
- AnyTimestampType() - Constructor for class org.apache.spark.sql.types.AnyTimestampType
-
- ApiHelper - Class in org.apache.spark.ui.jobs
-
- ApiHelper() - Constructor for class org.apache.spark.ui.jobs.ApiHelper
-
- ApiRequestContext - Interface in org.apache.spark.status.api.v1
-
- APP_SPARK_VERSION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
- appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- append() - Method in class org.apache.spark.sql.DataFrameWriterV2
-
Append the contents of the data frame to the output table.
- Append() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which only the new rows in the streaming DataFrame/Dataset will be
written to the sink.
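A sketch of selecting this mode on a streaming query, assuming a hypothetical streaming Dataset named events and a console sink for illustration:

    import org.apache.spark.sql.streaming.OutputMode

    val query = events.writeStream
      .outputMode(OutputMode.Append()) // emit only newly appended rows
      .format("console")
      .start()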
- appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Returns a new vector with 1.0 (bias) appended to the input vector.
- appendColumn(StructType, String, DataType, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- appendColumn(StructType, StructField) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- AppHistoryServerPlugin - Interface in org.apache.spark.status
-
An interface for creating history listeners (to replay event logs) defined in other modules like
SQL, and to set up the plugin's UI to rebuild the history UI.
- appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- appId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
-
- appId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
-
- appId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
-
- appId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
-
- APPLICATION_EXECUTOR_LIMIT() - Static method in class org.apache.spark.ui.ToolTips
-
- APPLICATION_MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
-
- applicationAttemptId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the attempt ID for this run, if the cluster manager supports multiple
attempts.
- applicationAttemptId() - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Get an application's attempt ID associated with the job.
- applicationAttemptId() - Method in class org.apache.spark.SparkContext
-
- ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1
-
- applicationEndFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- applicationEndToJson(SparkListenerApplicationEnd, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- ApplicationEnvironmentInfo - Class in org.apache.spark.status.api.v1
-
- applicationId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get an application ID associated with the job.
- applicationId() - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Get an application ID associated with the job.
- applicationId() - Method in class org.apache.spark.SparkContext
-
A unique identifier for the Spark application.
- ApplicationInfo - Class in org.apache.spark.status.api.v1
-
- APPLICATIONS() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
-
- applicationStartFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- applicationStartToJson(SparkListenerApplicationStart, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- ApplicationStatus - Enum in org.apache.spark.status.api.v1
-
- apply(T1) - Static method in class org.apache.spark.CleanAccum
-
- apply(T1) - Static method in class org.apache.spark.CleanBroadcast
-
- apply(T1) - Static method in class org.apache.spark.CleanCheckpoint
-
- apply(T1) - Static method in class org.apache.spark.CleanRDD
-
- apply(T1) - Static method in class org.apache.spark.CleanShuffle
-
- apply(T1) - Static method in class org.apache.spark.CleanSparkListener
-
- apply(T1, T2) - Static method in class org.apache.spark.ContextBarrierId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.ErrorInfo
-
- apply(int) - Static method in class org.apache.spark.ErrorMessageFormat
-
- apply(T1) - Static method in class org.apache.spark.ErrorSubInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.ExceptionFailure
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.ExecutorLostFailure
-
- apply(T1) - Static method in class org.apache.spark.ExecutorRegistered
-
- apply(T1) - Static method in class org.apache.spark.ExecutorRemoved
-
- apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.FetchFailed
-
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of vertices and
edges with attributes.
- apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from edges, setting referenced vertices to defaultVertexAttr.
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from vertices and edges, setting missing vertices to defaultVertexAttr.
- apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
- apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
-
Execute a Pregel-like iterative vertex-parallel abstraction.
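The canonical use is single-source shortest paths; a sketch assuming a hypothetical graph with Double edge weights and an assumed source vertex id sourceId:

    import org.apache.spark.graphx._

    val init = graph.mapVertices((id, _) =>
      if (id == sourceId) 0.0 else Double.PositiveInfinity)
    val sssp = Pregel(init, Double.PositiveInfinity)(
      (id, dist, msg) => math.min(dist, msg),     // vertex program: keep the shorter distance
      t =>                                        // send messages along improving edges only
        if (t.srcAttr + t.attr < t.dstAttr)
          Iterator((t.dstId, t.srcAttr + t.attr))
        else Iterator.empty,
      (a, b) => math.min(a, b))                   // merge incoming messages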
- apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an
EdgeRDD) from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(DenseMatrix<Object>, DenseMatrix<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
-
- apply(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>, Function2<Object, Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
-
- apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its name.
- apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its index.
- apply(T1, T2) - Static method in class org.apache.spark.ml.clustering.ClusterData
-
- apply(T1, T2) - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- apply(int, int) - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- apply(int) - Method in class org.apache.spark.ml.linalg.DenseVector
-
- apply(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- apply(int) - Method in class org.apache.spark.ml.linalg.SparseVector
-
- apply(int) - Method in interface org.apache.spark.ml.linalg.Vector
-
Gets the value of the ith element.
- apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Gets the value of the input param or its default value if it does not exist.
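A small sketch of the lookup, using a LogisticRegression estimator purely for illustration:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.param.ParamMap

    val lr = new LogisticRegression()
    val pm = ParamMap(lr.maxIter -> 20, lr.regParam -> 0.1)
    // apply() looks up the bound value (per the description above, the
    // param's default is used when no explicit value was set):
    val iters: Int = pm(lr.maxIter) // 20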
- apply(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
Constructs the FamilyAndLink object from a parameter map.
- apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceEnd
-
- apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceStart
-
- apply() - Static method in class org.apache.spark.ml.TransformEnd
-
- apply() - Static method in class org.apache.spark.ml.TransformStart
-
- apply(Split) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- apply(Row) - Method in class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
-
- apply(BinaryConfusionMatrix) - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryClassificationMetricComputer
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Precision
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Recall
-
- apply(T1) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- apply(T1, T2) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- apply(int) - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Gets the value of the ith element.
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- apply(T1, T2) - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
-
- apply(T1, T2) - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- apply(int, Node) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
-
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
-
- apply(int, Node) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- apply(Predict) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
-
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
-
- apply(Predict) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- apply(Split) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
-
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
-
- apply(Split) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.tree.model.Split
-
- apply(int) - Static method in class org.apache.spark.rdd.CheckpointState
-
- apply(int) - Static method in class org.apache.spark.rdd.DeterministicLevel
-
- apply(int) - Static method in class org.apache.spark.RequestMethod
-
- apply(T1, T2) - Static method in class org.apache.spark.resource.ResourceInformationJson
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- apply(String, long, Enumeration.Value, ByteBuffer, int, Map<String, ResourceInformation>) - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
Alternate factory method that takes a ByteBuffer directly for the data field.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.ExcludedExecutor
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.KillTask
-
- apply() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- apply() - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- apply(int) - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerLogStart
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
-
- apply(int) - Static method in class org.apache.spark.scheduler.TaskLocality
-
- apply(Object) - Method in class org.apache.spark.sql.Column
-
Extracts a value or values from a complex type.
- apply(String) - Method in class org.apache.spark.sql.Dataset
-
Selects a column based on the column name and returns it as a Column.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
Creates a Column for this UDAF using the given Columns as input arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
Creates a Column for this UDAF using the given Columns as input arguments.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
- apply(T1, T2) - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- apply() - Static method in class org.apache.spark.sql.Observation
-
Observation constructor for creating an anonymous observation.
- apply(String) - Static method in class org.apache.spark.sql.Observation
-
Observation constructor for creating a named observation.
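A sketch of collecting metrics with a named observation, assuming a hypothetical DataFrame df; the metrics become available once an action completes:

    import org.apache.spark.sql.Observation
    import org.apache.spark.sql.functions._

    val obs = Observation("stats")
    val observed = df.observe(obs, count(lit(1)).as("rows"), max(col("x")).as("max_x"))
    observed.collect()    // run an action so the metrics are produced
    val metrics = obs.get // Map("rows" -> ..., "max_x" -> ...)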
- apply(Dataset<Row>, Seq<Expression>, RelationalGroupedDataset.GroupType) - Static method in class org.apache.spark.sql.RelationalGroupedDataset
-
- apply(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.And
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualTo
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.In
-
- apply(T1) - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- apply(T1) - Static method in class org.apache.spark.sql.sources.IsNull
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThan
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- apply(T1) - Static method in class org.apache.spark.sql.sources.Not
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.Or
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringContains
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- apply(String, Option<Object>, Map<String, String>) - Static method in class org.apache.spark.sql.streaming.SinkProgress
-
- apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
-
Construct an ArrayType object with the given element type.
- apply(T1) - Static method in class org.apache.spark.sql.types.CharType
-
- apply() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
-
- apply(byte) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
-
- apply(double) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(long) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigInteger) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigInt) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(String) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
-
Construct a MapType object with the given key type and value type.
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.types.StructField
-
- apply(String) - Method in class org.apache.spark.sql.types.StructType
-
- apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
-
Returns a StructType containing StructFields of the given names, preserving the
original order of fields.
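A sketch combining the ArrayType and MapType factories above with the name-based StructType apply methods:

    import org.apache.spark.sql.types._

    val schema = StructType(Seq(
      StructField("id", LongType, nullable = false),
      StructField("tags", ArrayType(StringType)),            // ArrayType factory
      StructField("props", MapType(StringType, IntegerType)) // MapType factory
    ))
    val tagsField: StructField = schema("tags")          // apply(String)
    val subset: StructType = schema(Set("id", "tags"))   // apply(Set), keeps field order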
- apply(int) - Method in class org.apache.spark.sql.types.StructType
-
- apply(T1) - Static method in class org.apache.spark.sql.types.VarcharType
-
- apply() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
-
- apply(byte) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
-
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- apply(T1, T2) - Static method in class org.apache.spark.status.api.v1.sql.Metric
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.status.api.v1.sql.Node
-
- apply(T1) - Static method in class org.apache.spark.status.api.v1.StackTrace
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
- apply(int) - Method in class org.apache.spark.status.RDDPartitionSeq
-
- apply(String) - Static method in class org.apache.spark.storage.BlockId
-
- apply(String, String, int, Option<String>) - Static method in class org.apache.spark.storage.BlockManagerId
-
- apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId
-
- apply(T1, T2) - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- apply(T1, T2) - Static method in class org.apache.spark.storage.RDDBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockBatchId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockChunkId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleChecksumBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleMergedBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedDataBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShufflePushBlockId
-
- apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object.
- apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object without setting useOffHeap.
- apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object from its integer representation.
- apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Read StorageLevel object from ObjectInput stream.
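A sketch of the four-argument factory (useOffHeap unset), assuming a hypothetical rdd:

    import org.apache.spark.storage.StorageLevel

    // Serialized in memory with disk spill-over, replicated twice.
    val level = StorageLevel(useDisk = true, useMemory = true,
      deserialized = false, replication = 2)
    rdd.persist(level)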
- apply(T1, T2) - Static method in class org.apache.spark.storage.StreamBlockId
-
- apply(T1) - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- apply(T1) - Static method in class org.apache.spark.streaming.Duration
-
- apply(long) - Static method in class org.apache.spark.streaming.Milliseconds
-
- apply(long) - Static method in class org.apache.spark.streaming.Minutes
-
- apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- apply(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- apply(long) - Static method in class org.apache.spark.streaming.Seconds
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.TaskCommitDenied
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.TaskKilled
-
- apply(int) - Static method in class org.apache.spark.TaskState
-
- apply(TraversableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values.
- apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values passed as variable-length arguments.
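A sketch of the variable-length factory:

    import org.apache.spark.util.StatCounter

    val stats = StatCounter(1.0, 2.0, 3.0, 4.0)
    println(s"mean=${stats.mean} stdev=${stats.stdev} max=${stats.max}")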
- ApplyInPlace - Class in org.apache.spark.ml.ann
-
Implements in-place application of functions to arrays.
- ApplyInPlace() - Constructor for class org.apache.spark.ml.ann.ApplyInPlace
-
- applySchema(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- appName() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- appName() - Method in class org.apache.spark.SparkContext
-
- appName(String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a name for the application, which will be shown in the Spark web UI.
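A sketch of setting the name while building a session; the local master is an assumption for illustration:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("IndexedExamples") // displayed in the Spark web UI
      .master("local[*]")
      .getOrCreate()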
- approx_count_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
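A sketch, assuming a hypothetical df with a user_id column; the optional second argument is the maximum allowed relative standard deviation of the estimate:

    import org.apache.spark.sql.functions._

    val counts = df.agg(
      approx_count_distinct(col("user_id")).as("users"),
      approx_count_distinct(col("user_id"), 0.01).as("users_tight"))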
- approxCountDistinct(Column) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(String) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(String, double) - Static method in class org.apache.spark.sql.functions
-
- ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- ApproximateEvaluator<U,R> - Interface in org.apache.spark.partial
-
An object that computes a function incrementally by merging in results of type U from multiple
tasks.
- approxQuantile(String, double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the approximate quantiles of a numerical column of a DataFrame.
- approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the approximate quantiles of numerical columns of a DataFrame.
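A sketch on an assumed numeric column x; a relativeError of 0.0 computes exact (and more expensive) quantiles:

    // Quartiles of column "x" with 1% relative error.
    val Array(q1, median, q3) =
      df.stat.approxQuantile("x", Array(0.25, 0.5, 0.75), 0.01)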
- appSparkVersion() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- AppStatusUtils - Class in org.apache.spark.status
-
- AppStatusUtils() - Constructor for class org.apache.spark.status.AppStatusUtils
-
- archives() - Method in class org.apache.spark.SparkContext
-
- AreaUnderCurve - Class in org.apache.spark.mllib.evaluation
-
Computes the area under the curve (AUC) using the trapezoidal rule.
- AreaUnderCurve() - Constructor for class org.apache.spark.mllib.evaluation.AreaUnderCurve
-
- areaUnderPR() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the precision-recall curve.
- areaUnderROC() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Computes the area under the receiver operating characteristic (ROC) curve.
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the receiver operating characteristic (ROC) curve.
- argmax() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- argmax() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- argmax() - Method in interface org.apache.spark.ml.linalg.Vector
-
Find the index of a maximal element.
- argmax() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- argmax() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- argmax() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Find the index of a maximal element.
- arithmeticOverflowError(ArithmeticException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- arithmeticOverflowError(String, String, SQLQueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ARPACK - Class in org.apache.spark.mllib.linalg
-
ARPACK routines for MLlib's vectors and matrices.
- ARPACK() - Constructor for class org.apache.spark.mllib.linalg.ARPACK
-
- array(DataType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type array.
- array(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
-
- array_append(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns an ARRAY containing all elements from the source ARRAY as well as the new element.
- array_compact(Column) - Static method in class org.apache.spark.sql.functions
-
Removes all null elements from the given array.
- array_contains(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns null if the array is null, true if the array contains value, and false otherwise.
- array_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Removes duplicate values from the array.
- array_except(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the first array but not in the second array,
without duplicates.
- array_insert(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Adds an item into a given array at a specified position.
- array_intersect(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the intersection of the given two arrays,
without duplicates.
- array_join(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Concatenates the elements of column using the delimiter.
- array_join(Column, String) - Static method in class org.apache.spark.sql.functions
-
Concatenates the elements of column using the delimiter.
- array_max(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the maximum value in the array.
- array_min(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the minimum value in the array.
- array_position(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Locates the position of the first occurrence of the value in the given array as long.
- array_remove(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Removes all elements equal to element from the given array.
- array_repeat(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Creates an array containing the left argument repeated the number of times given by the
right argument.
- array_repeat(Column, int) - Static method in class org.apache.spark.sql.functions
-
Creates an array containing the left argument repeated the number of times given by the
right argument.
- array_sort(Column) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array in ascending order.
- array_sort(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array based on the given comparator function.
- array_to_vector(Column) - Static method in class org.apache.spark.ml.functions
-
Converts a column of array of numeric type into a column of dense vectors in MLlib.
- array_union(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the union of the given two arrays, without duplicates.
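A sketch exercising several of these array functions; the SparkSession named spark is an assumption:

    import org.apache.spark.sql.functions._
    import spark.implicits._ // assumes a SparkSession named spark

    val df = Seq((Seq(3, 1, 2), Seq(2, 4))).toDF("a", "b")
    df.select(
      array_sort($"a").as("sorted"),            // [1, 2, 3]
      array_max($"a").as("max"),                // 3
      array_contains($"a", 2).as("has2"),       // true
      array_intersect($"a", $"b").as("common"), // [2]
      array_union($"a", $"b").as("all")         // [3, 1, 2, 4]
    ).show()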
- arrayComponentTypeUnsupportedError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- arrayLengthGt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check that the array length is greater than lowerBound.
- arrays_overlap(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if a1 and a2 have at least one non-null element in common.
- arrays_zip(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns a merged array of structs in which the N-th struct contains all N-th values of input
arrays.
- arrays_zip(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns a merged array of structs in which the N-th struct contains all N-th values of input
arrays.
- ArrayType - Class in org.apache.spark.sql.types
-
- ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType
-
- arrayValues() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
-
- ArrowColumnVector - Class in org.apache.spark.sql.vectorized
-
A column vector backed by Apache Arrow.
- ArrowColumnVector(ValueVector) - Constructor for class org.apache.spark.sql.vectorized.ArrowColumnVector
-
- ArrowUtils - Class in org.apache.spark.sql.util
-
- ArrowUtils() - Constructor for class org.apache.spark.sql.util.ArrowUtils
-
- as(Encoder<U>) - Method in class org.apache.spark.sql.Column
-
Provides a type hint about the expected return value of this column.
- as(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(Seq<String>) - Method in class org.apache.spark.sql.Column
-
(Scala-specific) Assigns the given aliases to the results of a table generating function.
- as(String[]) - Method in class org.apache.spark.sql.Column
-
Assigns the given aliases to the results of a table generating function.
- as(Symbol) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(String, Metadata) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias with metadata.
- as(Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset where each record has been mapped onto the specified type.
- as(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with an alias set.
- as(Symbol) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
- as(Encoder<K>, Encoder<T>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Returns a KeyValueGroupedDataset where the data is grouped by the grouping expressions
of the current RelationalGroupedDataset.
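A sketch contrasting the as variants, assuming a hypothetical df with numeric columns x and y:

    import org.apache.spark.sql.functions._
    import spark.implicits._ // assumes a SparkSession named spark

    case class Point(x: Double, y: Double)

    val aliased = df.select(col("x").as("abscissa")) // Column.as: rename in a projection
    val typed   = df.as[Point]                       // Dataset.as(Encoder): typed view
    val grouped = df.groupBy($"x").as[Double, Point] // key-value grouped view keyed by x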
- asBinary() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Convenient method for casting to binary logistic regression summary.
- asBinary() - Method in interface org.apache.spark.ml.classification.RandomForestClassificationSummary
-
Convenient method for casting to BinaryRandomForestClassificationSummary.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts the instance to a breeze vector.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the instance to a breeze vector.
- asc() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column.
- asc(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column.
- asc_nulls_first() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column,
and null values appear before non-null values.
- asc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column,
and null values appear before non-null values.
- asc_nulls_last() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column,
and null values appear after non-null values.
- asc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column,
and null values appear after non-null values.
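A sketch of the ascending variants, assuming a df with a possibly-null score column:

    import org.apache.spark.sql.functions._

    df.orderBy(asc("score"))                // default null ordering
    df.orderBy(asc_nulls_first("score"))    // nulls sort first
    df.orderBy(col("score").asc_nulls_last) // nulls sort last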
- asCaseSensitiveMap() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the original case-sensitive map.
- ascii(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the numeric value of the first character of the string column, and returns the
result as an int column.
- asin(Column) - Static method in class org.apache.spark.sql.functions
-
- asin(String) - Static method in class org.apache.spark.sql.functions
-
- asinh(Column) - Static method in class org.apache.spark.sql.functions
-
- asinh(String) - Static method in class org.apache.spark.sql.functions
-
- asInteraction() - Static method in class org.apache.spark.ml.feature.Dot
-
- asInteraction() - Method in interface org.apache.spark.ml.feature.InteractableTerm
-
Convert to ColumnInteraction to wrap all interactions.
- asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator.
- asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD
-
- asJavaRDD() - Method in class org.apache.spark.api.r.RRDD
-
- asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD
-
- ask(Object) - Method in interface org.apache.spark.api.plugin.PluginContext
-
Send an RPC to the plugin's driver-side component.
- asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator over key-value pairs.
- AskPermissionToCommitOutput - Class in org.apache.spark.scheduler
-
- AskPermissionToCommitOutput(int, int, int, int) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- askRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the default Spark timeout to use for RPC ask operations.
- askStandaloneSchedulerToShutDownExecutorsError(Exception) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
-
- askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
-
- asML() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- asML() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- asML() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convert this matrix to the new mllib-local representation.
- asML() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- asML() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- asML() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Convert this vector to the new mllib-local representation.
- asNondeterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction to nondeterministic.
- asNonNullable() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction to non-nullable.
- asNullable() - Method in class org.apache.spark.sql.types.ObjectType
-
- asRDDId() - Method in class org.apache.spark.storage.BlockId
-
- assert_true(Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if the condition is true, and throws an exception otherwise.
- assert_true(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if the condition is true; throws an exception with the error message otherwise.
- assertExceptionMsg(Throwable, String, boolean, ClassTag<E>) - Static method in class org.apache.spark.TestUtils
-
Asserts that the exception message contains the given message.
- assertNotSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs
did not spill.
- assertSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs spilled.
- assignClusters(Dataset<?>) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
-
Runs the PIC algorithm and returns a cluster assignment for each input vertex.
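A sketch, assuming a hypothetical similarities DataFrame of (src, dst, weight) rows describing an affinity graph:

    import org.apache.spark.ml.clustering.PowerIterationClustering

    val pic = new PowerIterationClustering()
      .setK(2)
      .setWeightCol("weight")
    val assignments = pic.assignClusters(similarities) // columns: id, cluster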
- assignedAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Sequence of currently assigned resource addresses.
- Assignment(long, int) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
-
- Assignment$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
-
- assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
-
- AssociationRules - Class in org.apache.spark.ml.fpm
-
- AssociationRules() - Constructor for class org.apache.spark.ml.fpm.AssociationRules
-
- associationRules() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
Get association rules fitted using the minConfidence.
- AssociationRules - Class in org.apache.spark.mllib.fpm
-
Generates association rules from an RDD[FreqItemset[Item]].
- AssociationRules() - Constructor for class org.apache.spark.mllib.fpm.AssociationRules
-
Constructs a default instance with default parameters {minConfidence = 0.8}.
- AssociationRules.Rule<Item> - Class in org.apache.spark.mllib.fpm
-
An association rule between sets of items.
- asTerms() - Static method in class org.apache.spark.ml.feature.Dot
-
- asTerms() - Static method in class org.apache.spark.ml.feature.EmptyTerm
-
- asTerms() - Method in interface org.apache.spark.ml.feature.Term
-
Default representation of a single Term as a part of summed terms.
- AsyncEventQueue - Class in org.apache.spark.scheduler
-
An asynchronous queue for events.
- AsyncEventQueue(String, SparkConf, LiveListenerBusMetrics, LiveListenerBus) - Constructor for class org.apache.spark.scheduler.AsyncEventQueue
-
- AsyncRDDActions<T> - Class in org.apache.spark.rdd
-
A set of asynchronous RDD actions available through an implicit conversion.
- AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions
-
- atan(Column) - Static method in class org.apache.spark.sql.functions
-
- atan(String) - Static method in class org.apache.spark.sql.functions
-
- atan2(Column, Column) - Static method in class org.apache.spark.sql.functions
-
- atan2(Column, String) - Static method in class org.apache.spark.sql.functions
-
- atan2(String, Column) - Static method in class org.apache.spark.sql.functions
-
- atan2(String, String) - Static method in class org.apache.spark.sql.functions
-
- atan2(Column, double) - Static method in class org.apache.spark.sql.functions
-
- atan2(String, double) - Static method in class org.apache.spark.sql.functions
-
- atan2(double, Column) - Static method in class org.apache.spark.sql.functions
-
- atan2(double, String) - Static method in class org.apache.spark.sql.functions
-
- atanh(Column) - Static method in class org.apache.spark.sql.functions
-
- atanh(String) - Static method in class org.apache.spark.sql.functions
-
- attempt() - Method in class org.apache.spark.status.api.v1.TaskData
-
- ATTEMPT() - Static method in class org.apache.spark.status.TaskIndexNames
-
- ATTEMPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
- ATTEMPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
- ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
- ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
- attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- attemptId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
-
- attemptId() - Method in class org.apache.spark.status.api.v1.StageData
-
- attemptNumber() - Method in class org.apache.spark.BarrierTaskContext
-
- attemptNumber() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- attemptNumber() - Method in class org.apache.spark.scheduler.StageInfo
-
- attemptNumber() - Method in class org.apache.spark.scheduler.TaskInfo
-
- attemptNumber() - Method in class org.apache.spark.TaskCommitDenied
-
- attemptNumber() - Method in class org.apache.spark.TaskContext
-
How many times this task has been attempted.
- attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- ATTEMPTS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
- AtTimestamp(Date) - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp
-
- attr() - Method in class org.apache.spark.graphx.Edge
-
- attr() - Method in class org.apache.spark.graphx.EdgeContext
-
The attribute associated with the edge.
- attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- Attribute - Class in org.apache.spark.ml.attribute
-
Abstract class for ML attributes.
- Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute
-
- attribute() - Method in class org.apache.spark.sql.sources.EqualNullSafe
-
- attribute() - Method in class org.apache.spark.sql.sources.EqualTo
-
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThan
-
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- attribute() - Method in class org.apache.spark.sql.sources.In
-
- attribute() - Method in class org.apache.spark.sql.sources.IsNotNull
-
- attribute() - Method in class org.apache.spark.sql.sources.IsNull
-
- attribute() - Method in class org.apache.spark.sql.sources.LessThan
-
- attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- attribute() - Method in class org.apache.spark.sql.sources.StringContains
-
- attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith
-
- attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith
-
- AttributeFactory - Interface in org.apache.spark.ml.attribute
-
Trait for ML attribute factories.
- AttributeGroup - Class in org.apache.spark.ml.attribute
-
Attributes that describe a vector ML column.
- AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group without attribute info.
- AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group knowing only the number of attributes.
- AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group with attributes.
- AttributeKeys - Class in org.apache.spark.ml.attribute
-
Keys used to store attributes.
- AttributeKeys() - Constructor for class org.apache.spark.ml.attribute.AttributeKeys
-
- attributeNameSyntaxError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Optional array of attributes.
- ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- attributes() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
-
- attributes() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
-
- attributes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- ATTRIBUTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- attributesForTypeUnsupportedError(ScalaReflection.Schema) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- AttributeType - Class in org.apache.spark.ml.attribute
-
An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
- AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType
-
- attrType() - Method in class org.apache.spark.ml.attribute.Attribute
-
Attribute type.
- attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
- attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- available() - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- available() - Method in class org.apache.spark.io.ReadAheadInputStream
-
- available() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- availableAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Sequence of currently available resource addresses.
- AvailableNow() - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger that processes all available data at the start of the query in one or multiple
batches, then terminates the query.
- Average() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- avg(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated.
Average aggregate function.
- avg(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated.
Average aggregate function.
- avg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
- avg(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
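A minimal sketch of the avg variants above, assuming a local SparkSession; the DataFrame and its column names are illustrative.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.avg

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val sales = Seq(("a", 10.0), ("b", 20.0), ("a", 30.0)).toDF("key", "amount")

// functions.avg as an aggregate expression.
sales.groupBy("key").agg(avg($"amount")).show()

// RelationalGroupedDataset.avg over named numeric columns.
sales.groupBy("key").avg("amount").show()
```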
- avg() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the average of elements added to the accumulator.
- avg() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the average of elements added to the accumulator.
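A short sketch of the accumulator avg() methods above: LongAccumulator (and DoubleAccumulator) track a running sum and count, so the average is cheap to read on the driver.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
val sc = spark.sparkContext

val acc = sc.longAccumulator("latency")
sc.parallelize(Seq(1L, 2L, 3L, 4L)).foreach(v => acc.add(v))
println(acc.avg) // 2.5
```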
- avgEventRate() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- avgInputRate() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgMetrics() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- avgProcessingTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgSchedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgTotalDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- AvroMatchedField$() - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroMatchedField$
-
- AvroSchemaHelper(Schema, StructType, Seq<String>, Seq<String>, boolean) - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
-
- AvroUtils - Class in org.apache.spark.sql.avro
-
- AvroUtils() - Constructor for class org.apache.spark.sql.avro.AvroUtils
-
- AvroUtils.AvroMatchedField$ - Class in org.apache.spark.sql.avro
-
- AvroUtils.AvroSchemaHelper - Class in org.apache.spark.sql.avro
-
Helper class to perform field lookup/matching on Avro schemas.
- AvroUtils.RowReader - Interface in org.apache.spark.sql.avro
-
- awaitAnyTermination() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
- awaitAnyTermination(long) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
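A sketch of awaitAnyTermination, assuming a local SparkSession; the rate source and console sink are used purely for illustration.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

val query = spark.readStream.format("rate").load()
  .writeStream.format("console").start()

// Block the driver until some active query stops or fails; the timed
// overload instead returns whether a query terminated within the timeout.
spark.streams.awaitAnyTermination()
```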
- awaitReady(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.ready().
- awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.result().
- awaitResult(Future<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
- awaitTermination() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTermination(long) - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Wait for the execution to stop.
- awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Wait for the execution to stop.
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y += a * x
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y += a * x
- BACKUP_STANDALONE_MASTER_PREFIX() - Static method in class org.apache.spark.util.Utils
-
An identifier that backup masters use in their responses.
- balanceSlack() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- barrier() - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental ::
Sets a global barrier and waits until all tasks in this stage hit this barrier.
- barrier() - Method in class org.apache.spark.rdd.RDD
-
:: Experimental ::
Marks the current stage as a barrier stage, where Spark must launch all tasks together.
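A minimal sketch of a barrier stage combining RDD.barrier() and BarrierTaskContext.barrier(): every task must reach the barrier before any proceeds. Partition count and data are illustrative; a barrier stage needs at least as many slots as tasks.
```scala
import org.apache.spark.BarrierTaskContext
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[4]").getOrCreate()
val rdd = spark.sparkContext.parallelize(1 to 8, numSlices = 4)

rdd.barrier().mapPartitions { iter =>
  val ctx = BarrierTaskContext.get()
  ctx.barrier() // global sync point across all tasks in this stage
  iter.map(_ * 2)
}.collect()
```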
- BARRIER() - Static method in class org.apache.spark.RequestMethod
-
- BARRIER_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
- BarrierCoordinatorMessage - Interface in org.apache.spark
-
- barrierStageWithDynamicAllocationError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- barrierStageWithRDDChainPatternError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- BarrierTaskContext - Class in org.apache.spark
-
:: Experimental ::
A TaskContext with extra contextual info and tooling for tasks in a barrier stage.
- BarrierTaskInfo - Class in org.apache.spark
-
:: Experimental ::
Carries all task infos of a barrier task.
- base64(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the BASE64 encoding of a binary column and returns it as a string column.
- BaseAppResource - Interface in org.apache.spark.status.api.v1
-
Base class for resource handlers that use app-specific data.
- baseOn(ParamPair<?>...) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- baseOn(ParamMap) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- baseOn(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
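A sketch of ParamGridBuilder: baseOn pins fixed parameters into every ParamMap, addGrid varies others, and build() (listed later in this index) expands the cross product. The estimator and values are illustrative.
```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.tuning.ParamGridBuilder

val lr = new LogisticRegression()
val grid = new ParamGridBuilder()
  .baseOn(lr.maxIter -> 50)               // fixed in every ParamMap
  .addGrid(lr.regParam, Array(0.01, 0.1)) // varied across ParamMaps
  .build()

grid.foreach(println) // two ParamMaps, both with maxIter = 50
```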
- BaseReadWrite - Interface in org.apache.spark.ml.util
-
Trait for MLWriter and MLReader.
- BaseRelation - Class in org.apache.spark.sql.sources
-
Represents a collection of tuples with a known schema.
- BaseRelation() - Constructor for class org.apache.spark.sql.sources.BaseRelation
-
- baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SparkSession
-
Convert a BaseRelation created for external data sources into a DataFrame.
- baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SQLContext
-
- BaseRRDD<T,U> - Class in org.apache.spark.api.r
-
- BaseRRDD(RDD<T>, int, byte[], String, String, byte[], Broadcast<Object>[], ClassTag<T>, ClassTag<U>) - Constructor for class org.apache.spark.api.r.BaseRRDD
-
- BaseStreamingAppResource - Interface in org.apache.spark.status.api.v1.streaming
-
Base class for streaming API handlers; provides easy access to the streaming listener that holds the app's information.
- BasicBlockReplicationPolicy - Class in org.apache.spark.storage
-
- BasicBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.BasicBlockReplicationPolicy
-
- basicCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
Use a basic AWS keypair for long-lived authorization.
- basicSparkPage(HttpServletRequest, Function0<Seq<Node>>, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns a page with the Spark CSS/JS and a simple format.
- BATCH_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
- BATCH_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
- batchDuration() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- BATCHES() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
- batchId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- batchId() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- BatchInfo - Class in org.apache.spark.status.api.v1.streaming
-
- BatchInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
Class having information on completed batches.
- BatchInfo(Time, Map<Object, StreamInputInfo>, long, Option<Object>, Option<Object>, Map<Object, OutputOperationInfo>) - Constructor for class org.apache.spark.streaming.scheduler.BatchInfo
-
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- batchInfos() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
-
- batchMetadataFileNotFoundError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- BatchStatus - Enum in org.apache.spark.status.api.v1.streaming
-
- batchTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- batchTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- batchTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- batchWriteCapabilityError(Table, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- bbos() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
-
- bean(Class<T>) - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder for a Java Bean of type T.
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
-
- beforeFetch(Connection, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Override connection-specific properties to run before a select is made.
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- BernoulliCellSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi ::
A sampler based on Bernoulli trials for partitioning a data sequence.
- BernoulliCellSampler(double, double, boolean) - Constructor for class org.apache.spark.util.random.BernoulliCellSampler
-
- BernoulliSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi ::
A sampler based on Bernoulli trials.
- BernoulliSampler(double, ClassTag<T>) - Constructor for class org.apache.spark.util.random.BernoulliSampler
-
- bestModel() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- bestModel() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- beta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
The beta value, which controls precision vs recall weighting, used in "weightedFMeasure" and "fMeasureByLabel".
- beta() - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- between(Object, Object) - Method in class org.apache.spark.sql.Column
-
True if the current column is between the lower bound and upper bound, inclusive.
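A small sketch of Column.between, assuming a local SparkSession; note both bounds are inclusive.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val people = Seq(("ann", 17), ("bob", 18), ("cy", 65)).toDF("name", "age")

// between is inclusive: both 18 and 65 pass the filter.
people.filter(col("age").between(18, 65)).show()
```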
- bin(Column) - Static method in class org.apache.spark.sql.functions
-
An expression that returns the string representation of the binary value of the given long
column.
- bin(String) - Static method in class org.apache.spark.sql.functions
-
An expression that returns the string representation of the binary value of the given long
column.
- Binarizer - Class in org.apache.spark.ml.feature
-
Binarize a column of continuous features given a threshold.
- Binarizer(String) - Constructor for class org.apache.spark.ml.feature.Binarizer
-
- Binarizer() - Constructor for class org.apache.spark.ml.feature.Binarizer
-
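A sketch of Binarizer, assuming a local SparkSession: values strictly above the threshold map to 1.0, the rest to 0.0. Column names are illustrative.
```scala
import org.apache.spark.ml.feature.Binarizer
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(0.1, 0.5, 0.9).toDF("score")

val binarizer = new Binarizer()
  .setInputCol("score")
  .setOutputCol("label")
  .setThreshold(0.5) // only 0.9 becomes 1.0; 0.5 itself maps to 0.0

binarizer.transform(df).show()
```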
- Binary() - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Binary type.
- binary() - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- binary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- binary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Binary toggle to control the output vector values.
- binary() - Method in class org.apache.spark.ml.feature.HashingTF
-
Binary toggle to control term frequency counts.
- binary() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type binary.
- BINARY() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for arrays of bytes.
- binaryArithmeticCauseOverflowError(short, String, short) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- BinaryAttribute - Class in org.apache.spark.ml.attribute
-
A binary attribute.
- BinaryClassificationEvaluator - Class in org.apache.spark.ml.evaluation
-
Evaluator for binary classification, which expects input columns rawPrediction, label and
an optional weight column.
- BinaryClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- BinaryClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- BinaryClassificationMetricComputer - Interface in org.apache.spark.mllib.evaluation.binary
-
Trait for a binary classification evaluation metric computer.
- BinaryClassificationMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for binary classification.
- BinaryClassificationMetrics(RDD<? extends Product>, int) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
- BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Defaults numBins to 0.
- BinaryClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary classification results for a given model.
- binaryColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
-
- BinaryConfusionMatrix - Interface in org.apache.spark.mllib.evaluation.binary
-
Trait for a binary confusion matrix.
- binaryFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of binary files from HDFS, a local file system (available on all nodes),
or any Hadoop-supported file system URI as a byte array.
- binaryFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of binary files from HDFS, a local file system (available on all nodes),
or any Hadoop-supported file system URI as a byte array.
- binaryFiles(String, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop-readable dataset as PortableDataStream for each file (useful for binary data).
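A sketch of binaryFiles: each element is a (path, PortableDataStream) pair, and the stream's bytes are only materialized when read. The input path is hypothetical.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
val sc = spark.sparkContext

val files = sc.binaryFiles("/data/images") // RDD[(String, PortableDataStream)]
files.mapValues(stream => stream.toArray().length) // read each file's bytes
  .collect()
  .foreach { case (path, size) => println(s"$path: $size bytes") }
```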
- binaryLabelValidator() - Static method in class org.apache.spark.mllib.util.DataValidators
-
Function to check if labels used for classification are either zero or one.
- BinaryLogisticRegressionSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary logistic regression results for a given model.
- BinaryLogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary logistic regression results for a given model.
- BinaryLogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
-
- BinaryLogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary logistic regression training results.
- BinaryLogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary logistic regression training results.
- BinaryLogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl
-
- BinaryRandomForestClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for BinaryRandomForestClassification results for a given model.
- BinaryRandomForestClassificationSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary RandomForestClassification results for a given model.
- BinaryRandomForestClassificationSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
-
- BinaryRandomForestClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for BinaryRandomForestClassification training results.
- BinaryRandomForestClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary RandomForestClassification training results.
- BinaryRandomForestClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationTrainingSummaryImpl
-
- binaryRecords(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Load data from a flat binary file, assuming the length of each record is constant.
- binaryRecords(String, int, Configuration) - Method in class org.apache.spark.SparkContext
-
Load data from a flat binary file, assuming the length of each record is constant.
- binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as flat binary files with fixed record lengths, yielding byte arrays.
- binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them as flat binary files, assuming a fixed length per record,
generating one byte array per record.
- BinarySample - Class in org.apache.spark.mllib.stat.test
-
Class that represents the group and value of a sample.
- BinarySample(boolean, double) - Constructor for class org.apache.spark.mllib.stat.test.BinarySample
-
- binarySummary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Gets summary of model on training set.
- binarySummary() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
Gets summary of model on training set.
- BinaryType - Class in org.apache.spark.sql.types
-
The data type representing Array[Byte] values.
- BinaryType() - Constructor for class org.apache.spark.sql.types.BinaryType
-
- BinaryType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the BinaryType object.
- Binomial$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- BinomialBounds - Class in org.apache.spark.util.random
-
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact
sample size with high confidence when sampling without replacement.
- BinomialBounds() - Constructor for class org.apache.spark.util.random.BinomialBounds
-
- BisectingKMeans - Class in org.apache.spark.ml.clustering
-
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques"
by Steinbach, Karypis, and Kumar, with modification to fit Spark.
- BisectingKMeans(String) - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
-
- BisectingKMeans() - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
-
- BisectingKMeans - Class in org.apache.spark.mllib.clustering
-
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques"
by Steinbach, Karypis, and Kumar, with modification to fit Spark.
- BisectingKMeans() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeans
-
Constructs a BisectingKMeans instance with the default configuration.
- BisectingKMeansModel - Class in org.apache.spark.ml.clustering
-
Model fitted by BisectingKMeans.
- BisectingKMeansModel - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansModel(ClusteringTreeNode) - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
- BisectingKMeansModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansModel.SaveLoadV3_0$ - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansParams - Interface in org.apache.spark.ml.clustering
-
Common params for BisectingKMeans and BisectingKMeansModel.
- BisectingKMeansSummary - Class in org.apache.spark.ml.clustering
-
Summary of BisectingKMeans.
- bit_length(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the bit length for the specified string column.
- bitSize() - Method in class org.apache.spark.util.sketch.BloomFilter
-
Returns the number of bits in the underlying bit array.
- bitwise_not(Column) - Static method in class org.apache.spark.sql.functions
-
Computes bitwise NOT (~) of a number.
- bitwiseAND(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise AND of this expression with another expression.
- bitwiseNOT(Column) - Static method in class org.apache.spark.sql.functions
-
- bitwiseOR(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise OR of this expression with another expression.
- bitwiseXOR(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise XOR of this expression with another expression.
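A small sketch of the Column bitwise operators and functions.bitwise_not, with illustrative values.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{bitwise_not, col}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((0xF0, 0x0F)).toDF("a", "b")

df.select(
  col("a").bitwiseAND(col("b")), // 0
  col("a").bitwiseOR(col("b")),  // 255
  col("a").bitwiseXOR(col("b")), // 255
  bitwise_not(col("a"))          // -241 (two's complement of 240)
).show()
```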
- BLACKLISTED_IN_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- blacklistedInStages() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- BLAS - Class in org.apache.spark.ml.linalg
-
BLAS routines for MLlib's vectors and matrices.
- BLAS() - Constructor for class org.apache.spark.ml.linalg.BLAS
-
- BLAS - Class in org.apache.spark.mllib.linalg
-
BLAS routines for MLlib's vectors and matrices.
- BLAS() - Constructor for class org.apache.spark.mllib.linalg.BLAS
-
- BLOCK_NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
- BlockData - Interface in org.apache.spark.storage
-
Abstracts away how blocks are stored and provides different ways to read the underlying block
data.
- blockDoesNotExistError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- blockedByLock() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
- blockedByThreadId() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
- BlockEvictionHandler - Interface in org.apache.spark.storage.memory
-
- BlockGeneratorListener - Interface in org.apache.spark.streaming.receiver
-
Listener object for BlockGenerator events.
- blockHaveBeenRemovedError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- BlockId - Class in org.apache.spark.storage
-
:: DeveloperApi ::
Identifies a particular Block of data, usually associated with a single file.
- BlockId() - Constructor for class org.apache.spark.storage.BlockId
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocations
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- blockId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- blockId() - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
-
- blockIds() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
-
- BlockInfoWrapper - Class in org.apache.spark.storage
-
- BlockInfoWrapper(BlockInfo, Lock, Condition) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
-
- BlockInfoWrapper(BlockInfo, Lock) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
-
- BlockLocationsAndStatus(Seq<BlockManagerId>, BlockStatus, Option<String[]>) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
-
- BlockLocationsAndStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
-
- blockManager() - Method in class org.apache.spark.SparkEnv
-
- blockManagerAddedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockManagerAddedToJson(SparkListenerBlockManagerAdded, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- BlockManagerHeartbeat(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
-
- BlockManagerHeartbeat$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
-
- blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- BlockManagerId - Class in org.apache.spark.storage
-
:: DeveloperApi ::
This class represents a unique identifier for a BlockManager.
- BlockManagerId() - Constructor for class org.apache.spark.storage.BlockManagerId
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetPeers
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- blockManagerIdCache() - Static method in class org.apache.spark.storage.BlockManagerId
-
The max cache size is hardcoded to 10000; since a BlockManagerId object is about 48 bytes, the total memory cost should stay below 1 MB, which is feasible.
- blockManagerIdFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockManagerIdToJson(BlockManagerId, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- BlockManagerMessages - Class in org.apache.spark.storage
-
- BlockManagerMessages() - Constructor for class org.apache.spark.storage.BlockManagerMessages
-
- BlockManagerMessages.BlockLocationsAndStatus - Class in org.apache.spark.storage
-
The response message for a GetLocationsAndStatus request.
- BlockManagerMessages.BlockLocationsAndStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.BlockManagerHeartbeat - Class in org.apache.spark.storage
-
- BlockManagerMessages.BlockManagerHeartbeat$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.DecommissionBlockManager$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.DecommissionBlockManagers - Class in org.apache.spark.storage
-
- BlockManagerMessages.DecommissionBlockManagers$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetBlockStatus - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetBlockStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetExecutorEndpointRef - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetExecutorEndpointRef$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocations - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocations$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsAndStatus - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsAndStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsMultipleBlockIds - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsMultipleBlockIds$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetMatchingBlockIds - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetMatchingBlockIds$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetMemoryStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetPeers - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetPeers$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetReplicateInfoForRDDBlocks - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetReplicateInfoForRDDBlocks$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetShufflePushMergerLocations - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetShufflePushMergerLocations$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetStorageStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.IsExecutorAlive - Class in org.apache.spark.storage
-
- BlockManagerMessages.IsExecutorAlive$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RegisterBlockManager - Class in org.apache.spark.storage
-
- BlockManagerMessages.RegisterBlockManager$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBlock - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBlock$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBroadcast - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBroadcast$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveExecutor - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveExecutor$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveRdd - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveRdd$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShuffle - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShuffle$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShufflePushMergerLocation - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShufflePushMergerLocation$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.ReplicateBlock - Class in org.apache.spark.storage
-
- BlockManagerMessages.ReplicateBlock$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.StopBlockManagerMaster$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.ToBlockManagerMaster - Interface in org.apache.spark.storage
-
- BlockManagerMessages.ToBlockManagerMasterStorageEndpoint - Interface in org.apache.spark.storage
-
- BlockManagerMessages.TriggerThreadDump$ - Class in org.apache.spark.storage
-
Driver to Executor message to trigger a thread dump.
- BlockManagerMessages.UpdateBlockInfo - Class in org.apache.spark.storage
-
- BlockManagerMessages.UpdateBlockInfo$ - Class in org.apache.spark.storage
-
- blockManagerRemovedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockManagerRemovedToJson(SparkListenerBlockManagerRemoved, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- BlockMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a distributed matrix in blocks of local matrices.
- BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Alternate constructor for BlockMatrix that does not require specifying the number of rows and columns.
- blockName() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
-
- blockName() - Method in class org.apache.spark.status.LiveRDDPartition
-
- blockNotFoundError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- BlockNotFoundException - Exception in org.apache.spark.storage
-
- BlockNotFoundException(String) - Constructor for exception org.apache.spark.storage.BlockNotFoundException
-
- BlockReplicationPolicy - Interface in org.apache.spark.storage
-
::DeveloperApi::
BlockReplicationPrioritization provides logic for prioritizing a sequence of peers for
replicating blocks.
- BlockReplicationUtils - Class in org.apache.spark.storage
-
- BlockReplicationUtils() - Constructor for class org.apache.spark.storage.BlockReplicationUtils
-
- blocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- blockSize() - Method in interface org.apache.spark.ml.param.shared.HasBlockSize
-
Param for block size for stacking input data in matrices.
- blockSize() - Method in class org.apache.spark.ml.recommendation.ALS
-
- blockSize() - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- BlockStatus - Class in org.apache.spark.storage
-
- BlockStatus(StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockStatus
-
- blockStatusFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockStatusQueryReturnedNullError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- blockStatusToJson(BlockStatus, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdatedInfo() - Method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- BlockUpdatedInfo - Class in org.apache.spark.storage
-
:: DeveloperApi ::
Stores information about a block status in a block manager.
- BlockUpdatedInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockUpdatedInfo
-
- blockUpdatedInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdatedInfoToJson(BlockUpdatedInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdateFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdateToJson(SparkListenerBlockUpdated, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- bloomFilter(String, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(Column, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(String, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(Column, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- BloomFilter - Class in org.apache.spark.util.sketch
-
A Bloom filter is a space-efficient probabilistic data structure that offers an approximate
containment test with one-sided error: if it claims that an item is contained in it, this
might be in error, but if it claims that an item is not contained in it, then this is
definitely true.
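A sketch tying the bloomFilter stat methods above to the BloomFilter sketch class; the column, item count, and false-positive probability are illustrative.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val ids = (1L to 1000L).toDF("id")

// expectedNumItems = 1000, false-positive probability = 1%
val bf = ids.stat.bloomFilter("id", 1000L, 0.01)

bf.mightContain(42L)    // true
bf.mightContain(10000L) // usually false; may rarely be a false positive
```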
- BloomFilter() - Constructor for class org.apache.spark.util.sketch.BloomFilter
-
- BloomFilter.Version - Enum in org.apache.spark.util.sketch
-
- bmAddress() - Method in class org.apache.spark.FetchFailed
-
- BOOLEAN() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable boolean type.
- booleanColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
-
- BooleanParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Boolean] for Java.
- BooleanParam(String, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
-
- BooleanParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
-
- BooleanType - Class in org.apache.spark.sql.types
-
The data type representing Boolean values.
- BooleanType() - Constructor for class org.apache.spark.sql.types.BooleanType
-
- BooleanType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the BooleanType object.
- boost(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, boolean, long, String, Option<org.apache.spark.ml.util.Instrumentation>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Internal method for performing regression using trees as base learners.
- BoostingStrategy - Class in org.apache.spark.mllib.tree.configuration
-
- BoostingStrategy(Strategy, Loss, int, double, double) - Constructor for class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- bootstrap() - Method in interface org.apache.spark.ml.tree.RandomForestParams
-
Whether bootstrap samples are used when building trees.
- Both() - Static method in class org.apache.spark.graphx.EdgeDirection
-
Edges originating from *and* arriving at a vertex of interest.
- boundaries() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
Boundaries in increasing order for which predictions are known.
- boundaries() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
- BoundedDouble - Class in org.apache.spark.partial
-
A Double value with error bars and associated confidence.
- BoundedDouble(double, double, double, double) - Constructor for class org.apache.spark.partial.BoundedDouble
-
- BreezeUtil - Class in org.apache.spark.ml.ann
-
In-place DGEMM and DGEMV for Breeze.
- BreezeUtil() - Constructor for class org.apache.spark.ml.ann.BreezeUtil
-
- broadcast(T) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
- Broadcast<T> - Class in org.apache.spark.broadcast
-
A broadcast variable.
- Broadcast(long, ClassTag<T>) - Constructor for class org.apache.spark.broadcast.Broadcast
-
- broadcast(T, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
- broadcast(Dataset<T>) - Static method in class org.apache.spark.sql.functions
-
Marks a DataFrame as small enough for use in broadcast joins.
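A sketch of the broadcast join hint: marking the small side lets Spark ship it to every executor instead of shuffling both sides. The DataFrames are illustrative.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val facts = Seq((1, 10.0), (2, 20.0)).toDF("dimId", "value")
val dims  = Seq((1, "a"), (2, "b")).toDF("dimId", "name")

facts.join(broadcast(dims), "dimId").explain() // plan shows a BroadcastHashJoin
```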
- BROADCAST() - Static method in class org.apache.spark.storage.BlockId
-
- BroadcastBlockId - Class in org.apache.spark.storage
-
- BroadcastBlockId(long, String) - Constructor for class org.apache.spark.storage.BroadcastBlockId
-
- broadcastCleaned(long) - Method in interface org.apache.spark.CleanerListener
-
- BroadcastFactory - Interface in org.apache.spark.broadcast
-
An interface for all the broadcast implementations in Spark (to allow
multiple broadcast implementations).
- broadcastId() - Method in class org.apache.spark.CleanBroadcast
-
- broadcastId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
-
- broadcastId() - Method in class org.apache.spark.storage.BroadcastBlockId
-
- broadcastManager() - Method in class org.apache.spark.SparkEnv
-
- bround(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the column e rounded to 0 decimal places with HALF_EVEN round mode.
- bround(Column, int) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_EVEN round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
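A small sketch of bround's HALF_EVEN ("banker's") rounding: ties go to the even neighbor. The literals are chosen to be exactly representable in binary.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{bround, lit}

val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.range(1).select(
  bround(lit(2.5)),     // 2.0, not 3.0 (tie rounds to even)
  bround(lit(3.5)),     // 4.0
  bround(lit(0.25), 1)  // 0.2 (tie at the second decimal rounds to even)
).show()
```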
- bucket(Column, Column) - Static method in class org.apache.spark.sql.functions
-
A transform for any type that partitions by a hash of the input column.
- bucket(int, Column) - Static method in class org.apache.spark.sql.functions
-
A transform for any type that partitions by a hash of the input column.
- bucketBy(int, String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Buckets the output by the given columns.
- bucketBy(int, String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Buckets the output by the given columns.
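A sketch of bucketBy: pre-hash-partitioning table data on write so later joins and aggregations on the bucket column can avoid a shuffle. bucketBy requires saveAsTable; the table name and data are illustrative.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val events = Seq((1, "click"), (2, "view")).toDF("userId", "action")

events.write
  .bucketBy(8, "userId") // 8 buckets hashed on userId
  .sortBy("userId")      // optional: sort within each bucket
  .saveAsTable("events_bucketed")
```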
- bucketByAndSortByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- bucketByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- BucketedRandomProjectionLSH - Class in org.apache.spark.ml.feature
-
- BucketedRandomProjectionLSH(String) - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- BucketedRandomProjectionLSH() - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- BucketedRandomProjectionLSHModel - Class in org.apache.spark.ml.feature
-
- BucketedRandomProjectionLSHParams - Interface in org.apache.spark.ml.feature
-
- bucketingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- Bucketizer - Class in org.apache.spark.ml.feature
-
Bucketizer maps a column of continuous features to a column of feature buckets.
- Bucketizer(String) - Constructor for class org.apache.spark.ml.feature.Bucketizer
-
- Bucketizer() - Constructor for class org.apache.spark.ml.feature.Bucketizer
-
- bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- bucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams
-
The length of each hash bucket; a larger bucket lowers the false negative rate.
- bucketSortingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- buffer() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- bufferEncoder() - Method in class org.apache.spark.ml.feature.StringIndexerAggregator
-
- bufferEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
-
Specifies the Encoder for the intermediate value type.
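A sketch of a typed Aggregator illustrating bufferEncoder: the encoder describes the intermediate (sum, count) state, while outputEncoder describes the final Double. The Mean aggregator is a hypothetical example.
```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator

object Mean extends Aggregator[Double, (Double, Long), Double] {
  def zero: (Double, Long) = (0.0, 0L)
  def reduce(b: (Double, Long), a: Double): (Double, Long) = (b._1 + a, b._2 + 1)
  def merge(b1: (Double, Long), b2: (Double, Long)): (Double, Long) =
    (b1._1 + b2._1, b1._2 + b2._2)
  def finish(b: (Double, Long)): Double = b._1 / b._2
  // Encoder for the intermediate (sum, count) buffer.
  def bufferEncoder: Encoder[(Double, Long)] =
    Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._
Seq(1.0, 2.0, 3.0).toDS().select(Mean.toColumn).show() // 2.0
```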
- BufferReleasingInputStream - Class in org.apache.spark.storage
-
Helper class that ensures a ManagedBuffer is released upon InputStream.close(), and detects stream corruption if streamCompressedOrEncrypted is true.
- BufferReleasingInputStream(InputStream, ShuffleBlockFetcherIterator, BlockId, int, BlockManagerId, boolean, boolean, Option<CheckedInputStream>) - Constructor for class org.apache.spark.storage.BufferReleasingInputStream
-
- bufferSchema() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
A StructType represents data types of values in the aggregation buffer.
- build(Node, int) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
-
- build(DecisionTreeModel, int) - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
-
- build() - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Builds and returns all combinations of parameters specified by the param grid.
- build() - Method in class org.apache.spark.resource.ResourceProfileBuilder
-
- build() - Method in class org.apache.spark.sql.types.MetadataBuilder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- build() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder
-
- build() - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
- builder() - Static method in class org.apache.spark.sql.SparkSession
-
- Builder() - Constructor for class org.apache.spark.sql.SparkSession.Builder
-
- Builder() - Constructor for class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
- buildErrorResponse(Response.Status, String) - Static method in class org.apache.spark.ui.UIUtils
-
- buildFilter(Seq<Expression>, Seq<Attribute>) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Builds a function that can be used to filter batches prior to being decompressed.
- buildFilter(Seq<Expression>, Seq<Attribute>) - Method in class org.apache.spark.sql.columnar.SimpleMetricsCachedBatchSerializer
-
- buildLocationMetadata(Seq<Path>, int) - Static method in class org.apache.spark.util.Utils
-
Convert a sequence of Paths to a metadata string.
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- buildPlanNormalizationRules(SparkSession) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
- buildPools() - Method in interface org.apache.spark.scheduler.SchedulableBuilder
-
- buildReaderUnsupportedForFileFormatError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- buildScan(Seq<Attribute>, Seq<Expression>) - Method in interface org.apache.spark.sql.sources.CatalystScan
-
- buildScan(String[], Filter[]) - Method in interface org.apache.spark.sql.sources.PrunedFilteredScan
-
- buildScan(String[]) - Method in interface org.apache.spark.sql.sources.PrunedScan
-
- buildScan() - Method in interface org.apache.spark.sql.sources.TableScan
-
- buildTreeFromNodes(DecisionTreeModelReadWrite.NodeData[], String) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
-
Given all data for all nodes in a tree, rebuild the tree.
- BYTE() - Static method in class org.apache.spark.api.r.SerializationFormats
-
- BYTE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable byte type.
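A short Java sketch using this encoder; the SparkSession variable spark is assumed to exist:

    import java.util.Arrays;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;

    Dataset<Byte> bytes = spark.createDataset(
        Arrays.asList((byte) 1, (byte) 2, (byte) 3), Encoders.BYTE());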
- BytecodeUtils - Class in org.apache.spark.graphx.util
-
Includes a utility function to test whether a function accesses a specific attribute
of an object.
- BytecodeUtils() - Constructor for class org.apache.spark.graphx.util.BytecodeUtils
-
- ByteExactNumeric - Class in org.apache.spark.sql.types
-
- ByteExactNumeric() - Constructor for class org.apache.spark.sql.types.ByteExactNumeric
-
- BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.input$
-
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
-
- BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$
-
- BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
-
- bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
-
- bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
-
- bytesToString(long) - Static method in class org.apache.spark.util.Utils
-
Convert a quantity in bytes to a human-readable string such as "4.0 MiB".
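For illustration, a hedged sketch; Utils is an internal utility class, so treat this as illustrative rather than a supported API:

    import org.apache.spark.util.Utils;

    String readable = Utils.bytesToString(4L * 1024 * 1024);  // "4.0 MiB"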
- bytesToString(BigInt) - Static method in class org.apache.spark.util.Utils
-
- byteStringAsBytes(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to bytes for internal use.
- byteStringAsGb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m, 500g) to gibibytes for internal use.
- byteStringAsKb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to kibibytes for internal use.
- byteStringAsMb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to mebibytes for internal use.
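A sketch of the byteStringAs* family; same caveat that Utils is internal, and the results assume binary (1024-based) units:

    import org.apache.spark.util.Utils;

    long asBytes = Utils.byteStringAsBytes("100k");  // 102400 (100 * 1024)
    long asMb    = Utils.byteStringAsMb("1g");       // 1024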
- bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
-
- bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
-
- bytesWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
-
- bytesWritten(long) - Method in interface org.apache.spark.util.logging.RollingPolicy
-
Notify that bytes have been written.
- ByteType - Class in org.apache.spark.sql.types
-
The data type representing Byte values.
- ByteType() - Constructor for class org.apache.spark.sql.types.ByteType
-
- ByteType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the ByteType object.
- cache() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.api.java.JavaRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.graphx.Graph
-
Caches the vertices and edges associated with this graph at the previously-specified target
storage levels, which default to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- cache() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
Persists the vertex partitions at targetStorageLevel, which defaults to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Caches the underlying RDD.
- cache() - Method in class org.apache.spark.rdd.RDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.sql.Dataset
-
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
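A minimal Java sketch; spark and the input path are placeholders:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    Dataset<Row> df = spark.read().parquet("/path/to/data");  // hypothetical path
    df.cache();           // default storage level is MEMORY_AND_DISK
    long n = df.count();  // the first action materializes the cache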
- cache() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
- cache() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
- cache() - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
- CACHED_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
- CACHED_PARTITIONS() - Static method in class org.apache.spark.ui.storage.ToolTips
-
- CachedBatch - Interface in org.apache.spark.sql.columnar
-
Basic interface that all cached batches of data must support.
- CachedBatchSerializer - Interface in org.apache.spark.sql.columnar
-
Provides APIs that handle transformations of SQL data associated with the cache/persist APIs.
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- cacheNodeIds() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- cacheNodeIds() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- cacheNodeIds() - Method in interface org.apache.spark.ml.tree.DecisionTreeParams
-
If false, the algorithm will pass trees to executors to match instances with nodes.
- cacheSize() - Method in interface org.apache.spark.SparkExecutorInfo
-
- cacheSize() - Method in class org.apache.spark.SparkExecutorInfoImpl
-
- cacheTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Caches the specified table in-memory.
- cacheTable(String, StorageLevel) - Method in class org.apache.spark.sql.catalog.Catalog
-
Caches the specified table with the given storage level.
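A hedged Java sketch of both overloads; the table name "events" is a placeholder:

    import org.apache.spark.storage.StorageLevel;

    spark.catalog().cacheTable("events");                             // default storage level
    spark.catalog().cacheTable("events", StorageLevel.MEMORY_ONLY()); // explicit level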
- cacheTable(String) - Method in class org.apache.spark.sql.SQLContext
-
Caches the specified table in-memory.
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
variance calculation
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
variance calculation
- calculate(double[], double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
-
information calculation for multiclass classification
- calculate(double, double, double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
-
information calculation for regression
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
variance calculation
- calculateAmountAndPartsForFraction(double) - Static method in class org.apache.spark.resource.ResourceUtils
-
- calculateNumberOfPartitions(long, int, int) - Method in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
-
Calculate the number of partitions to use in saving the model.
- CalendarInterval - Class in org.apache.spark.unsafe.types
-
The class representing calendar intervals.
- CalendarInterval(int, int, long) - Constructor for class org.apache.spark.unsafe.types.CalendarInterval
-
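A small sketch of the constructor's argument order (months, days, microseconds):

    import org.apache.spark.unsafe.types.CalendarInterval;

    // 1 month, 2 days, 3 seconds
    CalendarInterval iv = new CalendarInterval(1, 2, 3000000L);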
- CalendarIntervalType - Class in org.apache.spark.sql.types
-
The data type representing calendar intervals.
- CalendarIntervalType() - Constructor for class org.apache.spark.sql.types.CalendarIntervalType
-
- CalendarIntervalType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the CalendarIntervalType object.
- call(K, Iterator<V1>, Iterator<V2>) - Method in interface org.apache.spark.api.java.function.CoGroupFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.DoubleFlatMapFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.DoubleFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.FilterFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.FlatMapFunction
-
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.FlatMapFunction2
-
- call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsFunction
-
- call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsWithStateFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.ForeachFunction
-
- call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.ForeachPartitionFunction
-
- call(T1) - Method in interface org.apache.spark.api.java.function.Function
-
- call() - Method in interface org.apache.spark.api.java.function.Function0
-
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.Function2
-
- call(T1, T2, T3) - Method in interface org.apache.spark.api.java.function.Function3
-
- call(T1, T2, T3, T4) - Method in interface org.apache.spark.api.java.function.Function4
-
- call(T) - Method in interface org.apache.spark.api.java.function.MapFunction
-
- call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.MapGroupsFunction
-
- call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.MapGroupsWithStateFunction
-
- call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.MapPartitionsFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.PairFlatMapFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.PairFunction
-
- call(T, T) - Method in interface org.apache.spark.api.java.function.ReduceFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.VoidFunction
-
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.VoidFunction2
-
- call() - Method in interface org.apache.spark.sql.api.java.UDF0
-
- call(T1) - Method in interface org.apache.spark.sql.api.java.UDF1
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) - Method in interface org.apache.spark.sql.api.java.UDF10
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11) - Method in interface org.apache.spark.sql.api.java.UDF11
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12) - Method in interface org.apache.spark.sql.api.java.UDF12
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13) - Method in interface org.apache.spark.sql.api.java.UDF13
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14) - Method in interface org.apache.spark.sql.api.java.UDF14
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Method in interface org.apache.spark.sql.api.java.UDF15
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16) - Method in interface org.apache.spark.sql.api.java.UDF16
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17) - Method in interface org.apache.spark.sql.api.java.UDF17
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18) - Method in interface org.apache.spark.sql.api.java.UDF18
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19) - Method in interface org.apache.spark.sql.api.java.UDF19
-
- call(T1, T2) - Method in interface org.apache.spark.sql.api.java.UDF2
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20) - Method in interface org.apache.spark.sql.api.java.UDF20
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21) - Method in interface org.apache.spark.sql.api.java.UDF21
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22) - Method in interface org.apache.spark.sql.api.java.UDF22
-
- call(T1, T2, T3) - Method in interface org.apache.spark.sql.api.java.UDF3
-
- call(T1, T2, T3, T4) - Method in interface org.apache.spark.sql.api.java.UDF4
-
- call(T1, T2, T3, T4, T5) - Method in interface org.apache.spark.sql.api.java.UDF5
-
- call(T1, T2, T3, T4, T5, T6) - Method in interface org.apache.spark.sql.api.java.UDF6
-
- call(T1, T2, T3, T4, T5, T6, T7) - Method in interface org.apache.spark.sql.api.java.UDF7
-
- call(T1, T2, T3, T4, T5, T6, T7, T8) - Method in interface org.apache.spark.sql.api.java.UDF8
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9) - Method in interface org.apache.spark.sql.api.java.UDF9
-
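A minimal Java sketch registering a UDF1; the SparkSession variable spark is assumed to exist and the UDF name is a placeholder:

    import org.apache.spark.sql.api.java.UDF1;
    import org.apache.spark.sql.types.DataTypes;

    spark.udf().register("plusOne",
        (UDF1<Integer, Integer>) x -> x + 1, DataTypes.IntegerType);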
- call_udf(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
- call_udf(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
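Calling a UDF by name, assuming one named "plusOne" was registered earlier; df and its "value" column are placeholders:

    import static org.apache.spark.sql.functions.call_udf;
    import static org.apache.spark.sql.functions.col;

    df.select(call_udf("plusOne", col("value")));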
- callSite() - Method in class org.apache.spark.storage.RDDInfo
-
- CALLSITE_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
- callUDF(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
- callUDF(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
- cancel() - Method in class org.apache.spark.ComplexFutureAction
-
- cancel() - Method in interface org.apache.spark.FutureAction
-
Cancels the execution of this action.
- cancel() - Method in class org.apache.spark.SimpleFutureAction
-
- cancelAllJobs() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel all jobs that have been scheduled or are running.
- cancelAllJobs() - Method in class org.apache.spark.SparkContext
-
Cancel all jobs that have been scheduled or are running.
- cancelJob(int, String) - Method in class org.apache.spark.SparkContext
-
Cancel a given job if it's scheduled or running.
- cancelJob(int) - Method in class org.apache.spark.SparkContext
-
Cancel a given job if it's scheduled or running.
- cancelJobGroup(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel active jobs for the specified group.
- cancelJobGroup(String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs for the specified group.
- cancelStage(int, String) - Method in class org.apache.spark.SparkContext
-
Cancel a given stage and all jobs associated with it.
- cancelStage(int) - Method in class org.apache.spark.SparkContext
-
Cancel a given stage and all jobs associated with it.
- cancelTasks(int, boolean) - Method in interface org.apache.spark.scheduler.TaskScheduler
-
- canCreate(String) - Method in interface org.apache.spark.scheduler.ExternalClusterManager
-
Check if this cluster manager instance can create scheduler components
for a certain master URL.
- canEqual(Object) - Static method in class org.apache.spark.ExpireDeadHosts
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.DirectPoolMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.GarbageCollectionMetrics
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.JVMHeapMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.JVMOffHeapMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.MappedPoolMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapExecutionMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapStorageMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.OffHeapUnifiedMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapExecutionMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapStorageMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.OnHeapUnifiedMemory
-
- canEqual(Object) - Static method in class org.apache.spark.metrics.ProcessTreeMetrics
-
- canEqual(Object) - Static method in class org.apache.spark.ml.feature.Dot
-
- canEqual(Object) - Static method in class org.apache.spark.ml.feature.EmptyTerm
-
- canEqual(Object) - Static method in class org.apache.spark.Resubmitted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
-
- canEqual(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator
-
- canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysFalse
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.AlwaysTrue
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.BinaryType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.BooleanType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ByteType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DateType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DoubleType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.FloatType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.IntegerType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.LongType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.NullType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ShortType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.StringType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampNTZType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.TimestampType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
-
- canEqual(Object) - Static method in class org.apache.spark.StopMapOutputTracker
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- canEqual(Object) - Static method in class org.apache.spark.Success
-
- canEqual(Object) - Static method in class org.apache.spark.TaskResultLost
-
- canEqual(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet
-
- canEqual(Object) - Static method in class org.apache.spark.UnknownReason
-
- canEqual(Object) - Method in class org.apache.spark.util.MutablePair
-
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
-
- canHandle(Driver, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcConnectionProvider
-
Checks if this connection provider instance can handle the connection initiated by the driver.
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Check if this dialect instance can handle a certain JDBC URL.
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- cannotAcquireMemoryToBuildLongHashedRelationError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotAcquireMemoryToBuildUnsafeHashedRelationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotAddMultiPartitionsOnNonatomicPartitionTableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotAllocateMemoryToGrowBytesToBytesMapError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotAlterTableWithAlterViewError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotAlterViewWithAlterTableError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotBroadcastTableOverMaxTableBytesError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotBroadcastTableOverMaxTableRowsError(long, long) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotBuildHashedRelationLargerThan8GError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotBuildHashedRelationWithUniqueKeysExceededError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCastError(DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCastFromNullTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotChangeDecimalPrecisionError(Decimal, int, int, SQLQueryContext) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotChangeStorageLevelError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- cannotCleanReservedNamespacePropertyError(String, ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- cannotCleanReservedTablePropertyError(String, ParserRuleContext, String) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- cannotClearOutputDirectoryError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotClearPartitionDirectoryError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCloneOrCopyReadOnlySQLConfError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCompareCostWithTargetCostError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotConvertCatalystTypeToProtobufEnumTypeError(Seq<String>, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotConvertCatalystTypeToProtobufTypeError(Seq<String>, String, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotConvertDataTypeToParquetTypeError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotConvertOrcTimestampNTZToTimestampLTZError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotConvertOrcTimestampToTimestampNTZError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotConvertProtobufTypeToCatalystTypeError(String, DataType, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotConvertProtobufTypeToSqlTypeError(String, Seq<String>, String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotConvertSqlTypeToProtobufError(String, DataType, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateArrayWithElementsExceedLimitError(long, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateColumnarReaderError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateDatabaseWithSameNameAsPreservedDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateJDBCNamespaceUsingProviderError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateJDBCTableUsingLocationError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateJDBCTableUsingProviderError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateJDBCTableWithPartitionsError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateParquetConverterForDataTypeError(DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateParquetConverterForDecimalTypeError(DecimalType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateParquetConverterForTypeError(DecimalType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateStagingDirError(String, IOException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotCreateTableWithBothProviderAndSerdeError(Option<String>, Option<SerdeInfo>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotCreateTempViewUsingHiveDataSourceError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotDeleteTableWhereFiltersError(Table, Predicate[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotDropBuiltinFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotDropDefaultDatabaseError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotDropMultiPartitionsOnNonatomicPartitionTableError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotDropNonemptyDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotDropNonemptyNamespaceError(Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotEvaluateExpressionError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotExecuteStreamingRelationExecError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotFetchTablesOfDatabaseError(String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotFindCatalogToHandleIdentifierError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotFindCatalystTypeInProtobufSchemaError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotFindColumnError(String, String[]) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotFindColumnInRelationOutputError(String, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotFindConstructorForTypeError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotFindDescriptorFileError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotFindEncoderForTypeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotFindPartitionColumnInPartitionSchemaError(StructField, StructType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotFindProtobufFieldInCatalystError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotGenerateCodeForExpressionError(Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotGenerateCodeForIncomparableTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotGetEventTimeWatermarkError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotGetJdbcTypeError(DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotGetOuterPointerForInnerClassError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotGetSQLConfInSchedulerEventLoopThreadError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotHaveCircularReferencesInBeanClassError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotHaveCircularReferencesInClassError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotInstantiateAbstractCatalogPluginClassError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotInterpolateClassIntoCodeBlockError(Object) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotLoadClassNotOnClassPathError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotLoadClassWhenRegisteringFunctionError(String, FunctionIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotLoadUserDefinedTypeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotMergeClassWithOtherClassError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotMergeDecimalTypesWithIncompatibleScaleError(int, int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotMergeIncompatibleDataTypesError(DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotModifyValueOfSparkConfigError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotModifyValueOfStaticConfigError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotMutateReadOnlySQLConfError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotOperateOnHiveDataSourceFilesError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotOverwritePathBeingReadFromError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotOverwriteTableThatIsBeingReadFromError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotParseDecimalError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotParseIntervalError(String, Throwable) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotParseJsonArraysAsStructsError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotParseJSONFieldError(JsonParser, JsonToken, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotParseStatisticAsPercentileError(String, NumberFormatException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotParseStringAsDataTypeError(JsonParser, JsonToken, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotParseStringAsDataTypeError(String, String, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotParseValueTypeError(String, String, SqlBaseParser.TypeConstructorContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- cannotPassTypedColumnInUntypedSelectError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotPurgeAsBreakInternalStateError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotReadCorruptedTablePropertyError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotReadFilesError(Throwable, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotReadFooterForFileError(Path, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotRecognizeHiveTypeError(ParseException, String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotRefreshBuiltInFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotRefreshTempFuncError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotRemovePartitionDirError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotRemoveReservedPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotRenameTableWithAlterViewError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotRenameTempViewToExistingTableError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotRenameTempViewWithDatabaseSpecifiedError(TableIdentifier, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotReplaceMissingTableError(Identifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotReplaceMissingTableError(Identifier, Option<Throwable>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotResolveAttributeError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotResolveColumnGivenInputColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotResolveColumnNameAmongAttributesError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotResolveStarExpandGivenInputColumnsError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotResolveWindowReferenceError(String, SqlBaseParser.WindowClauseContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- cannotRestorePermissionsForPathError(FsPermission, Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotRetrieveTableOrViewNotInSameDatabaseError(Seq<QualifiedTableName>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotRunSubmitMapStageOnZeroPartitionRDDError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- cannotSafelyMergeSerdePropertiesError(Map<String, String>, Map<String, String>, Set<String>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotSaveBlockOnDecommissionedExecutorError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- cannotSaveIntervalIntoExternalStorageError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotSetJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotSetTimeoutDurationError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotSetTimeoutTimestampError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotSpecifyBothJdbcTableNameAndQueryError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotSpecifyDatabaseForTempViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotSpecifyWindowFrameError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotTerminateGeneratorError(UnresolvedGenerator) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotTranslateExpressionToSourceFilterError(Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotTranslateNonNullValueForFieldError(int) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotUnsetJDBCNamespaceWithPropertyError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotUseAllColumnsForPartitionColumnsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotUseDataTypeForPartitionColumnError(StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotUseIntervalTypeInTableSchemaError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotUseInvalidJavaIdentifierAsFieldNameError(String, WalkedTypePath) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- cannotUseMapSideCombiningWithArrayKeyError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- cannotUsePreservedDatabaseAsCurrentDatabaseError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotWriteDataToRelationsWithMultiplePathsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotWriteIncompatibleDataToTableError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotWriteNotEnoughColumnsToTableError(String, Seq<Attribute>, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- cannotWriteTooManyColumnsToTableError(String, Seq<Attribute>, LogicalPlan) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- CanonicalRandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
-
- canOnlyZipRDDsWithSamePartitionSizeError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- canWrite(DataType, DataType, boolean, Function2<String, String, Object>, String, Enumeration.Value, Function1<String, BoxedUnit>) - Static method in class org.apache.spark.sql.types.DataType
-
Returns true if the write data type can be read using the read data type.
- cardinality() - Method in class org.apache.spark.util.sketch.BloomFilter
-
- cartesian(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of
elements (a, b) where a is in this and b is in other.
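A short Java sketch; jsc is an assumed JavaSparkContext:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;

    JavaRDD<Integer> nums = jsc.parallelize(Arrays.asList(1, 2));
    JavaRDD<String> letters = jsc.parallelize(Arrays.asList("a", "b"));
    JavaPairRDD<Integer, String> pairs = nums.cartesian(letters);  // 2 x 2 = 4 pairs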
- cartesian(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of
elements (a, b) where a is in this and b is in other.
- CaseInsensitiveStringMap - Class in org.apache.spark.sql.util
-
Case-insensitive map of string keys to string values.
- CaseInsensitiveStringMap(Map<String, String>) - Constructor for class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
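A minimal sketch showing the case-insensitive lookup:

    import java.util.Collections;
    import org.apache.spark.sql.util.CaseInsensitiveStringMap;

    CaseInsensitiveStringMap opts =
        new CaseInsensitiveStringMap(Collections.singletonMap("PATH", "/data"));
    opts.get("path");  // "/data": keys match regardless of case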
- caseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
Whether to do a case sensitive comparison over the stop words.
- cast(DataType) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type.
- cast(String) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type, using the canonical string representation
of the type.
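A sketch of both overloads; df and the "age" column are placeholders:

    import static org.apache.spark.sql.functions.col;
    import org.apache.spark.sql.types.DataTypes;

    df.select(col("age").cast("string"));              // canonical string form
    df.select(col("age").cast(DataTypes.StringType));  // equivalent DataType form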
- castingCauseOverflowError(Object, DataType, DataType) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- castingCauseOverflowErrorInTableInsert(DataType, DataType, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- castPartitionSpec(String, DataType, SQLConf) - Static method in class org.apache.spark.sql.util.PartitioningUtils
-
- Catalog - Class in org.apache.spark.sql.catalog
-
Catalog interface for Spark.
- Catalog() - Constructor for class org.apache.spark.sql.catalog.Catalog
-
- catalog() - Method in class org.apache.spark.sql.catalog.Database
-
- catalog() - Method in class org.apache.spark.sql.catalog.Function
-
- catalog() - Method in class org.apache.spark.sql.catalog.Table
-
- catalog() - Method in class org.apache.spark.sql.SparkSession
-
- catalogFailToCallPublicNoArgConstructorError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- catalogFailToFindPublicNoArgConstructorError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- CatalogMetadata - Class in org.apache.spark.sql.catalog
-
A catalog in Spark, as returned by the listCatalogs method defined in Catalog.
- CatalogMetadata(String, String) - Constructor for class org.apache.spark.sql.catalog.CatalogMetadata
-
- catalogOperationNotSupported(CatalogPlugin, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- catalogPluginClassNotFoundError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- catalogPluginClassNotFoundForCatalogError(String, String, Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- catalogPluginClassNotImplementedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- catalogString() - Method in class org.apache.spark.sql.types.ArrayType
-
- catalogString() - Static method in class org.apache.spark.sql.types.BinaryType
-
- catalogString() - Static method in class org.apache.spark.sql.types.BooleanType
-
- catalogString() - Static method in class org.apache.spark.sql.types.ByteType
-
- catalogString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- catalogString() - Method in class org.apache.spark.sql.types.DataType
-
String representation for the type saved in external catalogs.
- catalogString() - Static method in class org.apache.spark.sql.types.DateType
-
- catalogString() - Static method in class org.apache.spark.sql.types.DoubleType
-
- catalogString() - Static method in class org.apache.spark.sql.types.FloatType
-
- catalogString() - Static method in class org.apache.spark.sql.types.IntegerType
-
- catalogString() - Static method in class org.apache.spark.sql.types.LongType
-
- catalogString() - Method in class org.apache.spark.sql.types.MapType
-
- catalogString() - Static method in class org.apache.spark.sql.types.NullType
-
- catalogString() - Static method in class org.apache.spark.sql.types.ShortType
-
- catalogString() - Static method in class org.apache.spark.sql.types.StringType
-
- catalogString() - Method in class org.apache.spark.sql.types.StructType
-
- catalogString() - Static method in class org.apache.spark.sql.types.TimestampNTZType
-
- catalogString() - Static method in class org.apache.spark.sql.types.TimestampType
-
- catalogString() - Method in class org.apache.spark.sql.types.UserDefinedType
-
- CatalystScan - Interface in org.apache.spark.sql.sources
-
::Experimental::
An interface for experimenting with a more direct connection to the query planner.
- Categorical() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- categoricalCols() - Method in class org.apache.spark.ml.feature.FeatureHasher
-
Numeric columns to treat as categorical features.
- categoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- CategoricalSplit - Class in org.apache.spark.ml.tree
-
Split which tests a categorical feature.
- categories() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- categories() - Method in class org.apache.spark.mllib.tree.model.Split
-
- categoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- categorySizes() - Method in class org.apache.spark.ml.feature.OneHotEncoderModel
-
- cause() - Method in exception org.apache.spark.sql.AnalysisException
-
- cause() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
- CausedBy - Class in org.apache.spark.util
-
Extractor Object for pulling out the root cause of an error.
- CausedBy() - Constructor for class org.apache.spark.util.CausedBy
-
- cbrt(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the cube-root of the given value.
- cbrt(String) - Static method in class org.apache.spark.sql.functions
-
Computes the cube-root of the given column.
- ceil(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to scale decimal places.
- ceil(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to 0 decimal places.
- ceil(String) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value of e to 0 decimal places.
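A sketch of the unscaled and scaled variants; df and the "price" column are placeholders:

    import static org.apache.spark.sql.functions.ceil;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.lit;

    df.select(ceil(col("price")));          // ceiling to 0 decimal places
    df.select(ceil(col("price"), lit(1)));  // ceiling to 1 decimal place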
- ceil() - Method in class org.apache.spark.sql.types.Decimal
-
- censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- censorCol() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- censorCol() - Method in interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams
-
Param for censor column name.
- centerMatrix() - Method in class org.apache.spark.ml.clustering.KMeansAggregator
-
- chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, T, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<U>>, Function0<Parsers.Parser<Function2<T, U, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- chainr1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, U, U>>>, Function2<T, U, U>, U) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- changePrecision(int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Update precision and scale while keeping our value the same, and return true if successful.
- channel() - Method in interface org.apache.spark.shuffle.api.WritableByteChannelWrapper
-
The underlying channel to write bytes into.
- channelRead0(ChannelHandlerContext, byte[]) - Method in class org.apache.spark.api.r.RBackendAuthHandler
-
- charOrVarcharTypeAsStringUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- CharType - Class in org.apache.spark.sql.types
-
- CharType(int) - Constructor for class org.apache.spark.sql.types.CharType
-
- charTypeMissingLengthError(String, SqlBaseParser.PrimitiveDataTypeContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- checkAndGetK8sMasterUrl(String) - Static method in class org.apache.spark.util.Utils
-
Check the validity of the given Kubernetes master URL and return the resolved URL.
- checkColumnNameDuplication(Seq<String>, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if input column names have duplicate identifiers.
- checkColumnNameDuplication(Seq<String>, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if input column names have duplicate identifiers.
- checkColumnType(StructType, String, DataType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of the required data type.
- checkColumnTypes(StructType, String, Seq<DataType>, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of one of the required data types.
- checkDataColumns(RFormula, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
-
DataFrame column check.
- checkFileExists(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
Check if the file exists at the given path.
- checkHost(String) - Static method in class org.apache.spark.util.Utils
-
Checks if the host contains only a valid hostname/IP without a port.
NOTE: An IPv6 address should be enclosed inside [].
- checkHostPort(String) - Static method in class org.apache.spark.util.Utils
-
- checkIntegers(Dataset<?>, String) - Method in interface org.apache.spark.ml.recommendation.ALSModelParams
-
Attempts to safely cast a user/item id to an Int.
- checkNumericType(StructType, String, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of the numeric data type.
- checkOffHeapEnabled(SparkConf, long) - Static method in class org.apache.spark.util.Utils
-
Return 0 if MEMORY_OFFHEAP_ENABLED is false.
- checkpoint() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Mark this RDD for checkpointing.
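A minimal Java sketch (the paths are hypothetical); a checkpoint directory must be set before the first action:

    sc.setCheckpointDir("/tmp/checkpoints");           // sc: an existing JavaSparkContext
    JavaRDD<String> lines = sc.textFile("input.txt");
    lines.checkpoint();                                // marks the RDD; nothing runs yet
    lines.count();                                     // the action materializes the checkpoint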
- checkpoint() - Method in class org.apache.spark.graphx.Graph
-
Mark this Graph for checkpointing.
- checkpoint() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- checkpoint() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- checkpoint() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- checkpoint() - Method in class org.apache.spark.rdd.HadoopRDD
-
- checkpoint() - Method in class org.apache.spark.rdd.RDD
-
Mark this RDD for checkpointing.
- checkpoint() - Method in class org.apache.spark.sql.Dataset
-
Eagerly checkpoint a Dataset and return the new Dataset.
- checkpoint(boolean) - Method in class org.apache.spark.sql.Dataset
-
Returns a checkpointed version of this Dataset.
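A minimal Java sketch contrasting the eager and lazy forms (df and the directory name are hypothetical):

    spark.sparkContext().setCheckpointDir("/tmp/checkpoints");
    Dataset<Row> eager = df.checkpoint();       // runs a job and truncates the lineage now
    Dataset<Row> lazy  = df.checkpoint(false);  // checkpointed when first computed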
- checkpoint(Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Enable periodic checkpointing of RDDs of this DStream.
- checkpoint(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Sets the context to periodically checkpoint the DStream operations for master
fault-tolerance.
- checkpoint(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Enable periodic checkpointing of RDDs of this DStream.
- checkpoint(String) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Set the context to periodically checkpoint the DStream operations for driver
fault-tolerance.
- checkpointCleaned(long) - Method in interface org.apache.spark.CleanerListener
-
- checkpointDirectoryHasNotBeenSetInSparkContextError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- Checkpointed() - Static method in class org.apache.spark.rdd.CheckpointState
-
- checkpointFailedToSaveError(int, Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- CheckpointingInProgress() - Static method in class org.apache.spark.rdd.CheckpointState
-
- checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- checkpointInterval() - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- checkpointInterval() - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- checkpointInterval() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDA
-
- checkpointInterval() - Method in class org.apache.spark.ml.clustering.LDAModel
-
- checkpointInterval() - Method in interface org.apache.spark.ml.param.shared.HasCheckpointInterval
-
Param for the checkpoint interval (>= 1), or -1 to disable checkpointing.
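A minimal Java sketch using ALS (listed below); the interval only takes effect once a checkpoint directory has been set on the SparkContext:

    ALS als = new ALS()
        .setMaxIter(20)
        .setCheckpointInterval(5);   // checkpoint intermediate RDDs every 5 iterations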
- checkpointInterval() - Method in class org.apache.spark.ml.recommendation.ALS
-
- checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- checkpointInterval() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- checkpointInterval() - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- checkpointInterval() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- checkpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- checkpointLocationNotSpecifiedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- checkpointRDDBlockIdNotFoundError(RDDBlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- checkpointRDDHasDifferentNumberOfPartitionsFromOriginalRDDError(int, int, int, int) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- CheckpointReader - Class in org.apache.spark.streaming
-
- CheckpointReader() - Constructor for class org.apache.spark.streaming.CheckpointReader
-
- CheckpointState - Class in org.apache.spark.rdd
-
Enumeration to manage state transitions of an RDD through checkpointing.
- CheckpointState() - Constructor for class org.apache.spark.rdd.CheckpointState
-
- checkSchemaColumnNameDuplication(DataType, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if an input schema has duplicate column names.
- checkSchemaColumnNameDuplication(StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if an input schema has duplicate column names.
- checkSingleVsMultiColumnParams(Params, Seq<Param<?>>, Seq<Param<?>>) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Utility for Param validity checks for Transformers which have both single- and multi-column
support.
- checkSpeculatableTasks(long) - Method in interface org.apache.spark.scheduler.Schedulable
-
- checkState(boolean, Function0<String>) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
- checkThresholdConsistency() - Method in interface org.apache.spark.ml.classification.LogisticRegressionParams
-
If threshold and thresholds are both set, ensures they are consistent.
- checkTransformDuplication(Seq<Transform>, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if the partitioning transforms are being duplicated or not.
- checkUIViewPermissions() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
-
- checkUIViewPermissions(String, Option<String>, String) - Method in interface org.apache.spark.status.api.v1.UIRoot
-
- child() - Method in class org.apache.spark.sql.sources.Not
-
- CHILD_CLUSTERS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
- CHILD_CONNECTION_TIMEOUT - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Maximum time (in ms) to wait for a child process to connect back to the launcher server when using start().
- CHILD_NODES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper
-
- CHILD_PROCESS_LOGGER_NAME - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Logger name to use when launching a child process.
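A minimal Java sketch, assuming the constant is supplied through setConf (the resource, class, and logger names are hypothetical):

    SparkLauncher launcher = new SparkLauncher()
        .setAppResource("/path/to/app.jar")
        .setMainClass("com.example.Main")
        .setConf(SparkLauncher.CHILD_PROCESS_LOGGER_NAME, "com.example.launcher");
    Process app = launcher.launch();   // child process output goes to that logger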
- ChildFirstURLClassLoader - Class in org.apache.spark.util
-
A mutable class loader that gives preference to its own URLs over the parent class loader
when loading classes and resources.
- ChildFirstURLClassLoader(URL[], ClassLoader) - Constructor for class org.apache.spark.util.ChildFirstURLClassLoader
-
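A minimal Java sketch (the jar and class names are hypothetical):

    URL[] jars = { new File("plugin.jar").toURI().toURL() };
    ClassLoader loader = new ChildFirstURLClassLoader(
        jars, Thread.currentThread().getContextClassLoader());
    Class<?> pluginClass = loader.loadClass("com.example.Plugin");  // resolved child-first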
- chiSqFunc() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method
-
- ChiSqSelector - Class in org.apache.spark.ml.feature
-
- ChiSqSelector(String) - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- ChiSqSelector() - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
-
Deprecated.
- ChiSqSelector - Class in org.apache.spark.mllib.feature
-
Creates a ChiSquared feature selector.
- ChiSqSelector() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
-
- ChiSqSelector(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
-
This is the same as calling this() followed by setNumTopFeatures(numTopFeatures).
- ChiSqSelectorModel - Class in org.apache.spark.ml.feature
-
- ChiSqSelectorModel - Class in org.apache.spark.mllib.feature
-
Chi Squared selector model.
- ChiSqSelectorModel(int[]) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
- ChiSqSelectorModel.ChiSqSelectorModelWriter - Class in org.apache.spark.ml.feature
-
- ChiSqSelectorModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.feature
-
- ChiSqSelectorModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.feature
-
Model data for import/export
- ChiSqSelectorModel.SaveLoadV1_0$.Data$ - Class in org.apache.spark.mllib.feature
-
- ChiSqSelectorModelWriter(ChiSqSelectorModel) - Constructor for class org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter
-
- chiSqTest(Vector, Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's chi-squared goodness of fit test of the observed data against the
expected distribution.
- chiSqTest(Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's chi-squared goodness of fit test of the observed data against the uniform distribution, with each category having an expected frequency of 1 / observed.size.
- chiSqTest(Matrix) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's independence test on the input contingency matrix, which cannot contain
negative entries or columns or rows that sum up to 0.
- chiSqTest(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's independence test for every feature against the label across the input RDD.
- chiSqTest(JavaRDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of chiSqTest()
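A minimal Java sketch of the goodness-of-fit form (the counts are made up):

    import org.apache.spark.mllib.linalg.Vectors;
    import org.apache.spark.mllib.stat.Statistics;
    import org.apache.spark.mllib.stat.test.ChiSqTestResult;

    ChiSqTestResult result = Statistics.chiSqTest(
        Vectors.dense(2.0, 8.0, 10.0),    // observed counts
        Vectors.dense(4.0, 6.0, 10.0));   // expected counts
    System.out.println(result.pValue());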
- ChiSqTest - Class in org.apache.spark.mllib.stat.test
-
Conduct the chi-squared test for the input RDDs using the specified method.
- ChiSqTest() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest
-
- ChiSqTest.Method - Class in org.apache.spark.mllib.stat.test
-
param: name String name for the method.
- ChiSqTest.Method$ - Class in org.apache.spark.mllib.stat.test
-
- ChiSqTest.NullHypothesis$ - Class in org.apache.spark.mllib.stat.test
-
- ChiSqTestResult - Class in org.apache.spark.mllib.stat.test
-
Object containing the test results for the chi-squared hypothesis test.
- chiSquared(Vector, Vector, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
- chiSquaredFeatures(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
Conduct Pearson's independence test for each feature against the label across the input RDD.
- chiSquaredMatrix(Matrix, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
- ChiSquareTest - Class in org.apache.spark.ml.stat
-
Chi-square hypothesis testing for categorical data.
- ChiSquareTest() - Constructor for class org.apache.spark.ml.stat.ChiSquareTest
-
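A minimal Java sketch; dataset is assumed to be a Dataset&lt;Row&gt; with a Vector "features" column and a numeric "label" column:

    Dataset<Row> results = ChiSquareTest.test(dataset, "features", "label");
    results.select("pValues", "degreesOfFreedom", "statistics").show();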
- chmod700(File) - Static method in class org.apache.spark.util.Utils
-
JDK equivalent of chmod 700 file.
- CholeskyDecomposition - Class in org.apache.spark.mllib.linalg
-
Compute Cholesky decomposition.
- CholeskyDecomposition() - Constructor for class org.apache.spark.mllib.linalg.CholeskyDecomposition
-
- chunkId() - Method in class org.apache.spark.storage.ShuffleBlockChunkId
-
- cipherStream() - Method in interface org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler
-
The encrypted stream that may get into an unhealthy state.
- classDoesNotImplementUserDefinedAggregateFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- classForName(String, boolean, boolean) - Static method in class org.apache.spark.util.Utils
-
Preferred alternative to Class.forName(className) and Class.forName(className, initialize, loader), using the current thread's ContextClassLoader.
- classHasUnexpectedSerializerError(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- Classification() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- ClassificationLoss - Interface in org.apache.spark.mllib.tree.loss
-
- ClassificationModel<FeaturesType,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
-
- ClassificationModel() - Constructor for class org.apache.spark.ml.classification.ClassificationModel
-
- ClassificationModel - Interface in org.apache.spark.mllib.classification
-
Represents a classification model that predicts to which of a set of categories an example
belongs.
- ClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for multiclass classification results for a given model.
- Classifier<FeaturesType,E extends Classifier<FeaturesType,E,M>,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
-
Single-label binary or multiclass classification.
- Classifier() - Constructor for class org.apache.spark.ml.classification.Classifier
-
- classifier() - Method in class org.apache.spark.ml.classification.OneVsRest
-
- classifier() - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- classifier() - Method in interface org.apache.spark.ml.classification.OneVsRestParams
-
Param for the base binary classifier that multiclass classification is reduced to.
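A minimal Java sketch (train is a hypothetical labeled Dataset&lt;Row&gt;):

    OneVsRest ovr = new OneVsRest()
        .setClassifier(new LogisticRegression().setMaxIter(10));
    OneVsRestModel model = ovr.fit(train);   // one binary model per class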
- ClassifierParams - Interface in org.apache.spark.ml.classification
-
(private[spark]) Params for classification.
- ClassifierTypeTrait - Interface in org.apache.spark.ml.classification
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
-
- classifyException(String, Throwable) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Gets a dialect exception, classifies it, and wraps it in AnalysisException.
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- classifyException(String, Throwable) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- classIsLoadable(String) - Static method in class org.apache.spark.util.Utils
-
Determines whether the provided class is loadable in the current thread.
- className() - Method in class org.apache.spark.ExceptionFailure
-
- className() - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Unique class name for identifying JSON object encoded by this class.
- className() - Method in class org.apache.spark.sql.catalog.Function
-
- CLASSPATH_ENTRIES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo
-
- classpathEntries() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
-
- classTag() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
- classTag() - Method in class org.apache.spark.api.java.JavaPairRDD
-
- classTag() - Method in class org.apache.spark.api.java.JavaRDD
-
- classTag() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
- classTag() - Method in class org.apache.spark.sql.Dataset
-
- classTag() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- classTag() - Method in interface org.apache.spark.storage.memory.MemoryEntry
-
- classTag() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
- classTag() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- classUnsupportedByMapObjectsError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- classWithoutPublicNonArgumentConstructorError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- clean(long, boolean) - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Clean all the records that are older than the threshold time.
- clean(Object, boolean, boolean) - Static method in class org.apache.spark.util.ClosureCleaner
-
Clean the given closure in place.
- CleanAccum - Class in org.apache.spark
-
- CleanAccum(long) - Constructor for class org.apache.spark.CleanAccum
-
- CleanBroadcast - Class in org.apache.spark
-
- CleanBroadcast(long) - Constructor for class org.apache.spark.CleanBroadcast
-
- CleanCheckpoint - Class in org.apache.spark
-
- CleanCheckpoint(int) - Constructor for class org.apache.spark.CleanCheckpoint
-
- CleanerListener - Interface in org.apache.spark
-
Listener class used when any item has been cleaned by the Cleaner class.
- cleaning() - Method in class org.apache.spark.status.LiveStage
-
- CleanRDD - Class in org.apache.spark
-
- CleanRDD(int) - Constructor for class org.apache.spark.CleanRDD
-
- CleanShuffle - Class in org.apache.spark
-
- CleanShuffle(int) - Constructor for class org.apache.spark.CleanShuffle
-
- cleanShuffleDependencies(boolean) - Method in class org.apache.spark.rdd.RDD
-
Removes an RDD's shuffles and its non-persisted ancestors.
- CleanSparkListener - Class in org.apache.spark
-
- CleanSparkListener(SparkListener) - Constructor for class org.apache.spark.CleanSparkListener
-
- cleanupApplication() - Method in interface org.apache.spark.shuffle.api.ShuffleDriverComponents
-
Called once at the end of the Spark application to clean up any existing shuffle state.
- cleanupOldBlocks(long) - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockHandler
-
Clean up old blocks older than the given threshold time.
- cleanUpSourceFilesUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- CleanupTask - Interface in org.apache.spark
-
Classes that represent cleaning tasks.
- CleanupTaskWeakReference - Class in org.apache.spark
-
A WeakReference associated with a CleanupTask.
- CleanupTaskWeakReference(CleanupTask, Object, ReferenceQueue<Object>) - Constructor for class org.apache.spark.CleanupTaskWeakReference
-
- clear(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Clears the user-supplied value for the input param.
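A minimal Java sketch; after clear, the param falls back to its default (100 for LogisticRegression's maxIter):

    LogisticRegression lr = new LogisticRegression().setMaxIter(50);
    lr.clear(lr.maxIter());   // removes the user-supplied value
    lr.getMaxIter();          // now returns the default, 100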
- clear() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
- clear() - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- clear() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- clear() - Static method in class org.apache.spark.util.AccumulatorContext
-
- clearAccumulatorId() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
int64 accumulator_id = 2;
- clearAccumulatorUpdates() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- clearAccumulatorUpdates() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- clearAccumulatorUpdates() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;