- abort(Throwable) - Method in interface org.apache.spark.shuffle.api.ShuffleMapOutputWriter
-
- abort(WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.BatchWrite
-
- abort() - Method in interface org.apache.spark.sql.connector.write.DataWriter
-
Aborts this writer if it has failed.
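A minimal sketch of the writer lifecycle this implies (the writer and row source are hypothetical): write records, commit on success, and abort to discard partial output on failure.

```java
import java.io.IOException;
import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.connector.write.DataWriter;
import org.apache.spark.sql.connector.write.WriterCommitMessage;

final class WriteTaskSketch {
  // Drives a DataWriter through its contract; abort() cleans up if any write fails.
  static WriterCommitMessage writeAll(DataWriter<InternalRow> writer,
                                      Iterable<InternalRow> rows) throws IOException {
    try {
      for (InternalRow row : rows) {
        writer.write(row);
      }
      return writer.commit();  // success: commit message is sent back to the driver
    } catch (IOException e) {
      writer.abort();          // failure: discard partially written data
      throw e;
    }
  }
}
```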
- abort(long, WriterCommitMessage[]) - Method in interface org.apache.spark.sql.connector.write.streaming.StreamingWrite
-
- abortStagedChanges() - Method in interface org.apache.spark.sql.connector.catalog.StagedTable
-
Abort the changes that were staged, both in metadata and from temporary outputs of this
table's writers.
- abs(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the absolute value of a numeric value.
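For instance, a hedged one-liner applying it to a DataFrame column (df and the "delta" column are hypothetical):

```java
import static org.apache.spark.sql.functions.abs;
import static org.apache.spark.sql.functions.col;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Adds a column holding the absolute value of "delta".
Dataset<Row> withMagnitude = df.withColumn("abs_delta", abs(col("delta")));
```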
- abs(T) - Static method in class org.apache.spark.sql.types.ByteExactNumeric
-
- abs() - Method in class org.apache.spark.sql.types.Decimal
-
- abs(T) - Static method in class org.apache.spark.sql.types.DecimalExactNumeric
-
- abs(double) - Static method in class org.apache.spark.sql.types.DoubleExactNumeric
-
- abs(double) - Method in interface org.apache.spark.sql.types.DoubleType.DoubleIsConflicted
-
- abs(float) - Static method in class org.apache.spark.sql.types.FloatExactNumeric
-
- abs(float) - Method in interface org.apache.spark.sql.types.FloatType.FloatIsConflicted
-
- abs(T) - Static method in class org.apache.spark.sql.types.IntegerExactNumeric
-
- abs(T) - Static method in class org.apache.spark.sql.types.LongExactNumeric
-
- abs(T) - Static method in class org.apache.spark.sql.types.ShortExactNumeric
-
- absent() - Static method in class org.apache.spark.api.java.Optional
-
- AbsoluteError - Class in org.apache.spark.mllib.tree.loss
-
Class for absolute error loss calculation (for regression).
- AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
-
- AbstractLauncher<T extends AbstractLauncher<T>> - Class in org.apache.spark.launcher
-
Base class for launcher implementations.
- accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(Path) - Method in class org.apache.spark.ml.image.SamplePathFilter
-
- acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- AcceptsLatestSeenOffset - Interface in org.apache.spark.sql.connector.read.streaming
-
Indicates that the source accepts the latest seen offset, which requires the streaming execution
to provide the latest seen offset when restarting the streaming query from a checkpoint.
- acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType
-
- accessNonExistentAccumulatorError(long) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- accId() - Method in class org.apache.spark.CleanAccum
-
- accumCleaned(long) - Method in interface org.apache.spark.CleanerListener
-
- AccumulableInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
Information about an AccumulatorV2 modified during a task or stage.
- AccumulableInfo - Class in org.apache.spark.status.api.v1
-
- accumulableInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- AccumulableInfoSerializer - Class in org.apache.spark.status.protobuf
-
- AccumulableInfoSerializer() - Constructor for class org.apache.spark.status.protobuf.AccumulableInfoSerializer
-
- accumulableInfoToJson(AccumulableInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- accumulables() - Method in class org.apache.spark.scheduler.StageInfo
-
Terminal values of accumulables updated during this stage, including all the user-defined
accumulators.
- accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
-
Intermediate updates to accumulables during this task.
- accumulablesToJson(Iterable<AccumulableInfo>, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- ACCUMULATOR_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric
-
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
- ACCUMULATOR_UPDATES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
- AccumulatorContext - Class in org.apache.spark.util
-
An internal class used to track accumulators by Spark itself.
- AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
-
- ACCUMULATORS() - Static method in class org.apache.spark.status.TaskIndexNames
-
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
-
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
-
- AccumulatorV2<IN,OUT> - Class in org.apache.spark.util
-
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
- AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2
-
- accumUpdates() - Method in class org.apache.spark.ExceptionFailure
-
- accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- accumUpdates() - Method in class org.apache.spark.TaskKilled
-
- accuracy() - Method in interface org.apache.spark.ml.classification.ClassificationSummary
-
Returns accuracy.
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns accuracy.
- acos(Column) - Static method in class org.apache.spark.sql.functions
-
- acos(String) - Static method in class org.apache.spark.sql.functions
-
- acosh(Column) - Static method in class org.apache.spark.sql.functions
-
- acosh(String) - Static method in class org.apache.spark.sql.functions
-
- acquire(Seq<String>) - Method in interface org.apache.spark.resource.ResourceAllocator
-
Acquires a sequence of resource addresses (for a launched task); these addresses must be
available.
- actionNotAllowedOnTableSincePartitionMetadataNotStoredError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- actionNotAllowedOnTableWithFilesourcePartitionManagementDisabledError(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ActivationFunction - Interface in org.apache.spark.ml.ann
-
Trait for functions and their derivatives for functional layers.
- active() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the currently active SparkSession, otherwise the default one.
- active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns a list of active queries associated with this SQLContext.
- active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- ACTIVE_TASKS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- activeIterator() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns an iterator over all the active elements of this vector.
- activeIterator() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns an iterator over all the active elements of this vector.
- activeStages() - Method in class org.apache.spark.status.LiveJob
-
- activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- activeTasks() - Method in class org.apache.spark.status.LiveJob
-
- activeTasks() - Method in class org.apache.spark.status.LiveStage
-
- activeTasksPerExecutor() - Method in class org.apache.spark.status.LiveStage
-
- add(Tuple2<Vector, Object>) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
Adds a new training instance to this ExpectationAggregator, updating the weights,
means, and covariances of each distribution, and updating the log likelihood.
- add(org.apache.spark.ml.feature.InstanceBlock) - Method in class org.apache.spark.ml.clustering.KMeansAggregator
-
- add(Term) - Static method in class org.apache.spark.ml.feature.Dot
-
- add(Term) - Static method in class org.apache.spark.ml.feature.EmptyTerm
-
- add(Term) - Method in interface org.apache.spark.ml.feature.Term
-
Creates a summation term by concatenation of terms.
- add(Datum) - Method in interface org.apache.spark.ml.optim.aggregator.DifferentiableLossAggregator
-
Add a single data point to this aggregator.
- add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
-
- add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Adds a new document.
- add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Adds the given block matrix other to this block matrix: this + other.
- add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Add a new sample to this summarizer, and update the statistical summary.
- add(StructField) - Method in class org.apache.spark.sql.types.StructType
-
- add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata.
- add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata.
- add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata where the
dataType is specified as a String.
- add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata where the
dataType is specified as a String.
- add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the
dataType is specified as a String.
- add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the
dataType is specified as a String.
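Since each overload returns a new StructType, the calls chain naturally; a small sketch of building a schema this way:

```java
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// StructType is immutable: every add(...) returns a fresh copy with the extra field.
StructType schema = new StructType()
    .add("id", DataTypes.LongType, false)   // non-nullable field, no metadata
    .add("name", DataTypes.StringType)      // nullable by default
    .add("score", "double");                // dataType specified as a String
```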
- add(Long) - Method in class org.apache.spark.sql.util.MapperRowCounter
-
- add(double) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Adds a new data point to the histogram approximation.
- add(T) - Method in class org.apache.spark.sql.util.SQLOpenHashSet
-
- add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
- add(IN) - Method in class org.apache.spark.util.AccumulatorV2
-
Takes the inputs and accumulates.
- add(T) - Method in class org.apache.spark.util.CollectionAccumulator
-
- add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
- add(double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
- add(Long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
- add(long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increments the sum by v and the count by 1.
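A short sketch of the usual accumulator pattern (spark and df are hypothetical): register a LongAccumulator on the driver, call add inside tasks, and read the result back on the driver.

```java
import org.apache.spark.util.LongAccumulator;

// Register a named accumulator so it shows up in the web UI.
LongAccumulator badRecords = spark.sparkContext().longAccumulator("badRecords");
df.foreach(row -> {
  if (row.isNullAt(0)) {
    badRecords.add(1L);  // executed on executors; updates are merged on the driver
  }
});
long total = badRecords.sum();  // read only on the driver
```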
- add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
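For illustration, a hedged end-to-end use of the sketch (the items and parameters are arbitrary):

```java
import org.apache.spark.util.sketch.CountMinSketch;

// eps = 0.001 relative error at 99% confidence, seed 42.
CountMinSketch sketch = CountMinSketch.create(0.001, 0.99, 42);
sketch.add("apache");                            // count += 1
sketch.add("spark", 5);                          // count += 5
long estimate = sketch.estimateCount("spark");   // at least 5, within the error bound
```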
- add_months(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
- add_months(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
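Both overloads in a hedged snippet (df and its columns are hypothetical):

```java
import static org.apache.spark.sql.functions.add_months;
import static org.apache.spark.sql.functions.col;

// Fixed offset: three months after start_date.
df.withColumn("due_date", add_months(col("start_date"), 3));
// Per-row offset taken from another column.
df.withColumn("due_date", add_months(col("start_date"), col("n_months")));
```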
- ADD_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- ADD_TIME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary
-
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdates(int, StoreTypes.AccumulableInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAccumulatorUpdatesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAddresses(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAddressesBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 44;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAllAccumulatorUpdates(Iterable<? extends StoreTypes.AccumulableInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.AccumulableInfo accumulator_updates = 13;
- addAllAddresses(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
repeated string addresses = 2;
- addAllAttempts(Iterable<? extends StoreTypes.ApplicationAttemptInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAllBlacklistedInStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- addAllBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- addAllBytesWritten(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- addAllChildClusters(Iterable<? extends StoreTypes.RDDOperationClusterWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addAllChildNodes(Iterable<? extends StoreTypes.RDDOperationNode>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addAllClasspathEntries(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addAllCorruptMergedBlockChunks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- addAllDataDistribution(Iterable<? extends StoreTypes.RDDDataDistribution>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addAllDiskBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- addAllDiskBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- addAllDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- addAllEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addAllEdges(Iterable<? extends StoreTypes.SparkPlanGraphEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addAllExcludedInStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- addAllExecutorCpuTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- addAllExecutorDeserializeCpuTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- addAllExecutorDeserializeTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- addAllExecutorMetrics(Iterable<? extends StoreTypes.ExecutorMetrics>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addAllExecutorRunTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- addAllExecutors(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addAllFailedTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- addAllFetchWaitTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- addAllGettingResultTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- addAllHadoopProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addAllIncomingEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addAllInputBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- addAllInputRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- addAllJobIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- addAllJobTags(Iterable<String>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- addAllJvmGcTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- addAllKilledTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- addAllLocalBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- addAllLocalMergedBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- addAllLocalMergedBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- addAllLocalMergedChunksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- addAllMemoryBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- addAllMemoryBytesSpilled(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- addAllMergedFetchFallbackCount(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addAllMetrics(Iterable<? extends StoreTypes.SQLPlanMetric>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addAllMetricsProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addAllNodes(Iterable<? extends StoreTypes.SparkPlanGraphNodeWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addAllNodes(Iterable<? extends StoreTypes.SparkPlanGraphNodeWrapper>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addAllOutgoingEdges(Iterable<? extends StoreTypes.RDDOperationEdge>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addAllOutputBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- addAllOutputRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- addAllPartitions(Iterable<? extends StoreTypes.RDDPartitionInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addAllPeakExecutionMemory(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addAllQuantiles(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- addAllRddIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- addAllReadBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- addAllReadRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- addAllRecordsRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- addAllRecordsWritten(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- addAllRemoteBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- addAllRemoteBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- addAllRemoteBytesReadToDisk(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- addAllRemoteMergedBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- addAllRemoteMergedBytesRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- addAllRemoteMergedChunksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- addAllRemoteMergedReqsDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- addAllRemoteReqsDuration(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- addAllResourceProfiles(Iterable<? extends StoreTypes.ResourceProfileInfo>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addAllResultSerializationTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- addAllResultSize(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- addAllSchedulerDelay(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- addAllShuffleRead(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- addAllShuffleReadRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- addAllShuffleWrite(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- addAllShuffleWriteRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- addAllSkippedStages(Iterable<? extends Integer>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- addAllSources(Iterable<? extends StoreTypes.SourceProgress>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addAllSparkProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addAllStageIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- addAllStageIds(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- addAllStages(Iterable<? extends Long>) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- addAllStateOperators(Iterable<? extends StoreTypes.StateOperatorProgress>) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addAllSucceededTasks(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- addAllSystemProperties(Iterable<? extends StoreTypes.PairStrings>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addAllTaskTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- addAllTotalBlocksFetched(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- addAllWriteBytes(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- addAllWriteRecords(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- addAllWriteTime(Iterable<? extends Double>) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- addAppArgs(String...) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds command line arguments for the application.
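A minimal launcher sketch (paths and class names are hypothetical) showing where addAppArgs fits:

```java
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

SparkAppHandle handle = new SparkLauncher()
    .setAppResource("/path/to/app.jar")
    .setMainClass("com.example.Main")
    .setMaster("local[*]")
    .addAppArgs("--input", "data.csv")  // forwarded to the application's main()
    .startApplication();                // throws IOException if the child can't start
```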
- addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addArchive(String) - Method in class org.apache.spark.SparkContext
-
:: Experimental ::
Add an archive to be downloaded and unpacked with this Spark job on every node.
- addAttempts(StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(int, StoreTypes.ApplicationAttemptInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttempts(int, StoreTypes.ApplicationAttemptInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttemptsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addAttemptsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ApplicationAttemptInfo attempts = 7;
- addBin(double, double, int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Sets a particular histogram bin at the given index.
- addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addBlacklistedInStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 blacklisted_in_stages = 25;
- addBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double bytes_read = 1;
- addBytesWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double bytes_written = 1;
- addCatalogInCacheTableAsSelectNotAllowedError(String, SqlBaseParser.CacheTableContext) - Static method in class org.apache.spark.sql.errors.QueryParsingErrors
-
- addChildClusters(StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(int, StoreTypes.RDDOperationClusterWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClusters(int, StoreTypes.RDDOperationClusterWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClustersBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildClustersBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationClusterWrapper child_clusters = 4;
- addChildNodes(StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(int, StoreTypes.RDDOperationNode) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodes(int, StoreTypes.RDDOperationNode.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChildNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationNode child_nodes = 3;
- addChunk(ShuffleBlockChunkId, RoaringBitmap) - Method in class org.apache.spark.storage.PushBasedFetchHelper
-
This is executed by the task thread when iterator.next() is invoked and the iterator
processes a response of type ShuffleBlockFetcherIterator.PushMergedLocalMetaFetchResult.
- addClasspathEntries(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntries(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntriesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addClasspathEntriesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings classpath_entries = 6;
- addColumn(String[], DataType) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding an optional column.
- addColumn(String[], DataType, boolean) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding a column.
- addColumn(String[], DataType, boolean, String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding a column.
- addColumn(String[], DataType, boolean, String, TableChange.ColumnPosition, ColumnDefaultValue) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange
-
Create a TableChange for adding a column.
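For example, a hedged sketch of adding one nullable column through a catalog (catalog and ident are hypothetical):

```java
import org.apache.spark.sql.connector.catalog.TableChange;
import org.apache.spark.sql.types.DataTypes;

// Adds a nullable top-level column "rating" with a comment.
TableChange addRating = TableChange.addColumn(
    new String[] {"rating"}, DataTypes.DoubleType, true, "user rating, 0-5");
catalog.alterTable(ident, addRating);
```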
- addColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- addCorruptMergedBlockChunks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double corrupt_merged_block_chunks = 1;
- addDataDistribution(StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(int, StoreTypes.RDDDataDistribution) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistribution(int, StoreTypes.RDDDataDistribution.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistributionBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDataDistributionBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDDataDistribution data_distribution = 8;
- addDirectoryError(Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- addDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double disk_bytes_spilled = 15;
- addDiskBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double disk_bytes_spilled = 14;
- addDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double duration = 2;
- addEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdges(StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(int, StoreTypes.SparkPlanGraphEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdges(int, StoreTypes.SparkPlanGraphEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge edges = 2;
- addEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphEdge edges = 3;
- addExcludedInStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
repeated int64 excluded_in_stages = 31;
- addExecutorCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_cpu_time = 6;
- addExecutorDeserializeCpuTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_cpu_time = 4;
- addExecutorDeserializeTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_deserialize_time = 3;
- addExecutorMetrics(StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(int, StoreTypes.ExecutorMetrics) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetrics(int, StoreTypes.ExecutorMetrics.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated .org.apache.spark.status.protobuf.ExecutorMetrics executor_metrics = 2;
- addExecutorRunTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double executor_run_time = 5;
- addExecutors(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addExecutorsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
repeated string executors = 5;
- addFailedTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double failed_tasks = 3;
- addFetchWaitTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double fetch_wait_time = 5;
- addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a file to be submitted with the application.
- addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addFile(String) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
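Files added this way are resolved on executors with SparkFiles.get; a hedged sketch (sc and the path are hypothetical):

```java
import org.apache.spark.SparkFiles;

sc.addFile("hdfs://namenode/config/lookup.csv");
// Later, inside a task on any node, locate the downloaded copy:
String localPath = SparkFiles.get("lookup.csv");
```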
- addFilesWithAbsolutePathUnsupportedError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- addFilter(ServletContextHandler, String, Map<String, String>) - Static method in class org.apache.spark.ui.JettyUtils
-
- addGettingResultTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double getting_result_time = 10;
- addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a param with multiple values (overwrites if the input param exists).
- addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a double param with multiple values.
- addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds an int param with multiple values.
- addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a float param with multiple values.
- addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a long param with multiple values.
- addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a boolean param with true and false.
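Putting the overloads together, a small grid sketch (the LogisticRegression estimator is just an example):

```java
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.ml.tuning.ParamGridBuilder;

LogisticRegression lr = new LogisticRegression();
// 2 x 2 x 2 = 8 parameter combinations for cross-validation.
ParamMap[] grid = new ParamGridBuilder()
    .addGrid(lr.regParam(), new double[] {0.01, 0.1})
    .addGrid(lr.elasticNetParam(), new double[] {0.0, 0.5})
    .addGrid(lr.fitIntercept())  // boolean param expands to {true, false}
    .build();
```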
- addHadoopProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addHadoopPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings hadoop_properties = 3;
- addIncomingEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addIncomingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge incoming_edges = 4;
- addInputBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_bytes = 6;
- addInputRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double input_records = 7;
- addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJar(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a jar file to be submitted with the application.
- addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addJar(String) - Method in class org.apache.spark.SparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJarsToClassPath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
-
- addJarToClasspath(String, MutableURLClassLoader) - Static method in class org.apache.spark.util.DependencyUtils
-
- addJobIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
repeated int64 job_ids = 2;
- addJobTag(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a tag to be assigned to all the jobs started by this thread.
- addJobTag(String) - Method in class org.apache.spark.SparkContext
-
Add a tag to be assigned to all the jobs started by this thread.
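A short sketch of job tagging (Scala), assuming a live SparkContext named sc; the tag name is illustrative:

  sc.addJobTag("nightly-etl")          // every job started by this thread carries the tag
  // ... trigger actions ...
  sc.cancelJobsWithTag("nightly-etl")  // later, cancel all jobs carrying the tag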
- addJobTags(String) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- addJobTagsBytes(ByteString) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated string job_tags = 21;
- addJvmGcTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double jvm_gc_time = 8;
- addKilledTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double killed_tasks = 5;
- addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Adds a listener to be notified of changes to the handle's information.
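A sketch of watching launcher state changes (Scala); the app resource and main class are placeholders:

  import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}
  val handle = new SparkLauncher()
    .setAppResource("/opt/jobs/app.jar")   // hypothetical application jar
    .setMainClass("com.example.Main")      // hypothetical main class
    .setMaster("local[*]")
    .startApplication()
  handle.addListener(new SparkAppHandle.Listener {
    override def stateChanged(h: SparkAppHandle): Unit = println(s"state: ${h.getState}")
    override def infoChanged(h: SparkAppHandle): Unit = ()
  })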
- addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
- addListener(L) - Method in interface org.apache.spark.util.ListenerBus
-
Add a listener to listen for events.
- addLocalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double local_blocks_fetched = 4;
- addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
-
Add Hadoop configuration specific to a single partition and attempt.
- addLocalDirectoryError(Path) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- addLocalMergedBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_blocks_fetched = 4;
- addLocalMergedBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_bytes_read = 8;
- addLocalMergedChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double local_merged_chunks_fetched = 6;
- addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
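A sketch of approximate counting with CountMinSketch (Scala); the eps, confidence, and seed values are illustrative:

  import org.apache.spark.util.sketch.CountMinSketch
  val cms = CountMinSketch.create(0.01, 0.99, 42) // 1% relative error, 99% confidence
  cms.addLong(7L)                 // increment 7's count by one
  cms.addLong(7L, 4L)             // increment it by four more
  println(cms.estimateCount(7L))  // at least 5, within the configured error bound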
- addMapOutput(int, MapStatus) - Method in class org.apache.spark.ShuffleStatus
-
Register a map output.
- addMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double memory_bytes_spilled = 14;
- addMemoryBytesSpilled(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double memory_bytes_spilled = 13;
- addMergedFetchFallbackCount(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double merged_fetch_fallback_count = 2;
- addMergeResult(int, org.apache.spark.scheduler.MergeStatus) - Method in class org.apache.spark.ShuffleStatus
-
Register a merge result.
- addMetrics(TaskMetrics, TaskMetrics) - Static method in class org.apache.spark.status.LiveEntityHelpers
-
Add m2 values to m1.
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetrics(StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(int, StoreTypes.SQLPlanMetric) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetrics(int, StoreTypes.SQLPlanMetric.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 5;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 4;
- addMetricsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated .org.apache.spark.status.protobuf.SQLPlanMetric metrics = 7;
- addMetricsProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addMetricsPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings metrics_properties = 5;
- addNaN() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
-
- addNewDefaultColumnToExistingTableNotAllowed(String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- addNewFunctionMismatchedWithFunctionError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodes(int, StoreTypes.SparkPlanGraphNodeWrapper.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 4;
- addNodesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNodesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.SparkPlanGraphNodeWrapper nodes = 2;
- addNull() - Method in class org.apache.spark.sql.util.SQLOpenHashSet
-
- addOutgoingEdges(StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(int, StoreTypes.RDDOperationEdge) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdges(int, StoreTypes.RDDOperationEdge.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdgesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutgoingEdgesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
repeated .org.apache.spark.status.protobuf.RDDOperationEdge outgoing_edges = 3;
- addOutputBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_bytes = 8;
- addOutputRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double output_records = 9;
- addPartition(LiveRDDPartition) - Method in class org.apache.spark.status.RDDPartitionSeq
-
- addPartitions(StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(int, StoreTypes.RDDPartitionInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitions(int, StoreTypes.RDDPartitionInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitionsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartitionsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
repeated .org.apache.spark.status.protobuf.RDDPartitionInfo partitions = 9;
- addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- addPeakExecutionMemory(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double peak_execution_memory = 12;
- addPyFile(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a Python file / zip / egg to be submitted with the application.
- addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
repeated double quantiles = 1;
- addQuantiles(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double quantiles = 1;
- addRddIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
repeated int64 rdd_ids = 43;
- addReadBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_bytes = 1;
- addReadRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double read_records = 2;
- addRecordsRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
repeated double records_read = 2;
- addRecordsWritten(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
repeated double records_written = 2;
- addRemoteBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_blocks_fetched = 3;
- addRemoteBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read = 6;
- addRemoteBytesReadToDisk(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_bytes_read_to_disk = 7;
- addRemoteMergedBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_blocks_fetched = 3;
- addRemoteMergedBytesRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_bytes_read = 7;
- addRemoteMergedChunksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_chunks_fetched = 5;
- addRemoteMergedReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
repeated double remote_merged_reqs_duration = 9;
- addRemoteReqsDuration(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double remote_reqs_duration = 9;
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- addRepeatedField(Descriptors.FieldDescriptor, Object) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- addRequest(TaskResourceRequest) - Method in class org.apache.spark.resource.TaskResourceRequests
-
- addResourceProfiles(StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(int, StoreTypes.ResourceProfileInfo) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfiles(int, StoreTypes.ResourceProfileInfo.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfilesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- addResourceProfilesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.ResourceProfileInfo resource_profiles = 7;
- address() - Method in class org.apache.spark.BarrierTaskInfo
-
- address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
-
- ADDRESS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution
-
- addresses() - Method in class org.apache.spark.resource.ResourceInformation
-
- addresses() - Method in class org.apache.spark.resource.ResourceInformationJson
-
- ADDRESSES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation
-
- addResultSerializationTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_serialization_time = 9;
- addResultSize(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double result_size = 7;
- addSchedulable(Schedulable) - Method in interface org.apache.spark.scheduler.Schedulable
-
- addSchedulerDelay(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
repeated double scheduler_delay = 11;
- addShuffleRead(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read = 10;
- addShuffleReadRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_read_records = 11;
- addShuffleWrite(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write = 12;
- addShuffleWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double shuffle_write_records = 13;
- addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with default priority.
- addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with the given priority.
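A sketch (Scala), noting that ShutdownHookManager is Spark-internal API and may change between releases:

  import org.apache.spark.util.ShutdownHookManager
  ShutdownHookManager.addShutdownHook { () =>
    println("cleaning up temporary state") // runs when the JVM shuts down
  }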
- addSkippedStages(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
repeated int32 skipped_stages = 2;
- addSources(StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(int, StoreTypes.SourceProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSources(int, StoreTypes.SourceProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSourcesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSourcesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.SourceProgress sources = 10;
- addSparkArg(String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds a no-value argument to the Spark invocation.
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.AbstractLauncher
-
Adds an argument with a value to the Spark invocation.
- addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
-
- addSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Register a listener to receive up-calls from events that happen during execution.
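A sketch of a custom listener (Scala), assuming a live SparkContext named sc:

  import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}
  sc.addSparkListener(new SparkListener {
    override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
      println(s"job ${jobEnd.jobId} finished: ${jobEnd.jobResult}")
  })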
- addSparkProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addSparkPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings spark_properties = 2;
- addStageIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
repeated int64 stage_ids = 6;
- addStageIds(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
repeated int64 stage_ids = 2;
- addStages(long) - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
repeated int64 stages = 12;
- addStateOperators(StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(int, StoreTypes.StateOperatorProgress) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperators(int, StoreTypes.StateOperatorProgress.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperatorsBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStateOperatorsBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
repeated .org.apache.spark.status.protobuf.StateOperatorProgress state_operators = 9;
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
- addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addSucceededTasks(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double succeeded_tasks = 4;
- addSystemProperties(StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(int, StoreTypes.PairStrings) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemProperties(int, StoreTypes.PairStrings.Builder) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemPropertiesBuilder() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addSystemPropertiesBuilder(int) - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
repeated .org.apache.spark.status.protobuf.PairStrings system_properties = 4;
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.BarrierTaskContext
-
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
-
Adds a (Java friendly) listener to be executed on task completion.
- addTaskCompletionListener(Function1<TaskContext, U>) - Method in class org.apache.spark.TaskContext
-
Adds a listener in the form of a Scala closure to be executed on task completion.
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.BarrierTaskContext
-
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure (which includes completion listener failure, if
the task body did not already fail).
- addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure (which includes completion listener failure, if
the task body did not already fail).
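A sketch of task listeners (Scala), assuming an existing RDD named rdd; the listeners fire when an action executes the tasks:

  import org.apache.spark.TaskContext
  rdd.mapPartitions { iter =>
    val tc = TaskContext.get()
    tc.addTaskCompletionListener[Unit](_ => println("partition done"))
    tc.addTaskFailureListener((_, err) => println(s"task failed: ${err.getMessage}"))
    iter
  }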
- addTaskResourceRequests(SparkConf, TaskResourceRequests) - Static method in class org.apache.spark.resource.ResourceUtils
-
- addTaskSetManager(Schedulable, Properties) - Method in interface org.apache.spark.scheduler.SchedulableBuilder
-
- addTaskTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
repeated double task_time = 2;
- addTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- addTime() - Method in class org.apache.spark.status.api.v1.ProcessSummary
-
- addTotalBlocksFetched(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
repeated double total_blocks_fetched = 8;
- addURL(URL) - Method in class org.apache.spark.util.MutableURLClassLoader
-
- AddWebUIFilter(String, Map<String, String>, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
-
- AddWebUIFilter$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
-
- addWriteBytes(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_bytes = 1;
- addWriteRecords(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_records = 2;
- addWriteTime(double) - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
repeated double write_time = 3;
- advisoryPartitionSizeInBytes() - Method in interface org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering
-
Returns the advisory (not guaranteed) shuffle partition size in bytes for this write.
- aes_decrypt(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input using AES in mode with padding.
- aes_decrypt(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- aes_decrypt(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- aes_decrypt(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns a decrypted value of input.
- aes_encrypt(Column, Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input using AES in given mode with the specified padding.
- aes_encrypt(Column, Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
- aes_encrypt(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an encrypted value of input.
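A sketch of round-tripping with the AES helpers (Scala), assuming a DataFrame df with a string column named "payload"; the key literal is illustrative and must match a supported AES key length:

  import org.apache.spark.sql.functions._
  val key = lit("0000111122223333")  // illustrative 16-byte key
  val enc = df.select(aes_encrypt(col("payload"), key, lit("GCM")).as("c"))
  val dec = enc.select(aes_decrypt(col("c"), key, lit("GCM")).cast("string").as("p"))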
- aesCryptoError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- aesModeUnsupportedError(String, String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- aesUnsupportedAad(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- aesUnsupportedIv(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- after(String) - Static method in interface org.apache.spark.sql.connector.catalog.TableChange.ColumnPosition
-
- AFTSurvivalRegression - Class in org.apache.spark.ml.regression
-
- AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
-
- AFTSurvivalRegressionParams - Interface in org.apache.spark.ml.regression
-
Params for accelerated failure time (AFT) regression.
- agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
-
(Java-specific) Aggregates on the entire Dataset without groups.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>, TypedColumn<V, U5>, TypedColumn<V, U6>, TypedColumn<V, U7>, TypedColumn<V, U8>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
- agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying the column names and
aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying a map from column name to
aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Java-specific) Compute aggregates by specifying a map from column name to
aggregate methods.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
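A sketch of untyped aggregation (Scala), assuming a SparkSession named spark:

  import org.apache.spark.sql.functions._
  import spark.implicits._
  val df = Seq(("a", 1), ("a", 3), ("b", 2)).toDF("k", "v")
  df.agg(max($"v"), avg($"v")).show()                 // whole-Dataset, no groups
  df.groupBy($"k").agg(sum($"v").as("total")).show()  // per-key aggregates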
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregate the elements of each partition, and then the results for all the partitions, using
given combine functions and a neutral "zero value".
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Aggregate the elements of each partition, and then the results for all the partitions, using
given combine functions and a neutral "zero value".
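A sketch of RDD.aggregate (Scala), assuming a live SparkContext named sc:

  val nums = sc.parallelize(1 to 10)
  // compute (sum, count) in one pass, starting from a neutral zero value
  val (sum, count) = nums.aggregate((0, 0))(
    (acc, x) => (acc._1 + x, acc._2 + 1),  // fold a value into a partition accumulator
    (a, b) => (a._1 + b._1, a._2 + b._2))  // merge accumulators across partitions
  println(sum.toDouble / count)            // 5.5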
- aggregate(Column, Column, Function2<Column, Column, Column>, Function1<Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array,
and reduces this to a single state.
- aggregate(Column, Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Applies a binary operator to an initial state and all elements in the array,
and reduces this to a single state.
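A sketch of the SQL array aggregate function (Scala), assuming a DataFrame df with an array<int> column named "xs":

  import org.apache.spark.sql.functions._
  df.select(aggregate(col("xs"), lit(0), (acc, x) => acc + x).as("sum")).show()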
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
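A sketch of aggregateByKey (Scala), assuming a live SparkContext named sc:

  val pairs = sc.parallelize(Seq(("a", 1), ("a", 3), ("b", 2)))
  // per-key (sum, count), starting from a zero value for each key
  val stats = pairs.aggregateByKey((0, 0))(
    (acc, v) => (acc._1 + v, acc._2 + 1),
    (x, y) => (x._1 + y._1, x._2 + y._2))
  stats.collect().foreach(println)  // e.g. (a,(4,2)) and (b,(2,1))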
- AggregatedDialect - Class in org.apache.spark.sql.jdbc
-
AggregatedDialect can unify multiple dialects into one virtual Dialect.
- AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect
-
- aggregateExpressionRequiredForPivotError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregateExpressions() - Method in class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
- AggregateFunc - Interface in org.apache.spark.sql.connector.expressions.aggregate
-
Base class of the Aggregate Functions.
- AggregateFunction<S extends java.io.Serializable,R> - Interface in org.apache.spark.sql.connector.catalog.functions
-
Interface for a function that produces a result value by aggregating over multiple input rows.
- aggregateInAggregateFilterError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
-
Aggregates values from the neighboring edges and vertices of each vertex.
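A sketch of computing in-degrees with aggregateMessages (Scala), assuming an existing Graph named graph:

  import org.apache.spark.graphx._
  val inDegrees: VertexRDD[Int] =
    graph.aggregateMessages[Int](ctx => ctx.sendToDst(1), _ + _)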
- aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- aggregateTaskMetrics(long[]) - Method in class org.apache.spark.sql.connector.metric.CustomAvgMetric
-
- aggregateTaskMetrics(long[]) - Method in interface org.apache.spark.sql.connector.metric.CustomMetric
-
Given an array of task metric values, returns aggregated final metric value.
- aggregateTaskMetrics(long[]) - Method in class org.apache.spark.sql.connector.metric.CustomSumMetric
-
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
- AggregatingEdgeContext<VD,ED,A> - Class in org.apache.spark.graphx.impl
-
- AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- Aggregation - Class in org.apache.spark.sql.connector.expressions.aggregate
-
Aggregation in SQL statement.
- Aggregation(AggregateFunc[], Expression[]) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Aggregation
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVC
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- aggregationDepth() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- aggregationDepth() - Method in interface org.apache.spark.ml.param.shared.HasAggregationDepth
-
Param for suggested depth for treeAggregate (>= 2).
- aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegression
-
- aggregationDepth() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- aggregationFunctionAppliedOnNonNumericColumnError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregationFunctionAppliedOnNonNumericColumnError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aggregationNotAllowedInMergeCondition(String, Expression) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- Aggregator<K,V,C> - Class in org.apache.spark
-
:: DeveloperApi ::
A set of functions used to aggregate data.
- Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator
-
- aggregator() - Method in class org.apache.spark.ShuffleDependency
-
- Aggregator<IN,BUF,OUT> - Class in org.apache.spark.sql.expressions
-
A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
- Aggregator() - Constructor for class org.apache.spark.sql.expressions.Aggregator
-
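A minimal typed-average Aggregator sketch (Scala):

  import org.apache.spark.sql.{Encoder, Encoders}
  import org.apache.spark.sql.expressions.Aggregator

  object Avg extends Aggregator[Double, (Double, Long), Double] {
    def zero: (Double, Long) = (0.0, 0L)
    def reduce(b: (Double, Long), a: Double): (Double, Long) = (b._1 + a, b._2 + 1)
    def merge(x: (Double, Long), y: (Double, Long)): (Double, Long) = (x._1 + y._1, x._2 + y._2)
    def finish(r: (Double, Long)): Double = r._1 / r._2
    def bufferEncoder: Encoder[(Double, Long)] =
      Encoders.tuple(Encoders.scalaDouble, Encoders.scalaLong)
    def outputEncoder: Encoder[Double] = Encoders.scalaDouble
  }
  // usage: ds.select(Avg.toColumn) on a Dataset[Double]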
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
- aic() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
- Algo - Class in org.apache.spark.mllib.tree.configuration
-
Enum to select the algorithm for the decision tree.
- Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo
-
- algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
- alias(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- alias(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with an alias set.
- alias(Symbol) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
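A brief usage sketch, assuming df is an existing DataFrame:

    import org.apache.spark.sql.functions.col
    val renamed = df.select(col("price").alias("unit_price"))  // column alias
    val aliased = df.alias("t")  // dataset alias, usable as col("t.price") in joins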
- aliasesNumberNotMatchUDTFOutputError(int, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- aliasNumberNotMatchColumnNumberError(int, int, TreeNode<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- All - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose all the fields (source, edge, and destination).
- ALL_GATHER() - Static method in class org.apache.spark.RequestMethod
-
- ALL_REMOVALS_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
- ALL_UPDATES_TIME_MS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress
-
- allAvailable() - Static method in interface org.apache.spark.sql.connector.read.streaming.ReadLimit
-
- allGather(String) - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental ::
Blocks until all tasks in the same stage have reached this routine.
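A hedged sketch of how allGather is typically used inside a barrier stage (rdd is an assumed existing RDD):

    import org.apache.spark.BarrierTaskContext
    rdd.barrier().mapPartitions { iter =>
      val ctx = BarrierTaskContext.get()
      // Blocks until every task in the stage arrives; returns one message per task.
      val hosts: Array[String] =
        ctx.allGather(java.net.InetAddress.getLocalHost.getHostName)
      iter
    }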
- AllJobsCancelled - Class in org.apache.spark.scheduler
-
- AllJobsCancelled() - Constructor for class org.apache.spark.scheduler.AllJobsCancelled
-
- allocate(int) - Method in class org.apache.spark.sql.util.NumericHistogram
-
Sets the number of histogram bins to use for approximating data.
- allocator() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
-
- AllReceiverIds - Class in org.apache.spark.streaming.scheduler
-
A message used by ReceiverTracker to ask for the ids of all receivers still stored in ReceiverTrackerEndpoint.
- AllReceiverIds() - Constructor for class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- allRemovalsTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
- allSources() - Static method in class org.apache.spark.metrics.source.StaticSources
-
The set of all static sources.
- allSupportedExecutorResources() - Static method in class org.apache.spark.resource.ResourceProfile
-
Return all supported Spark built-in executor resources, custom resources like GPUs/FPGAs
are excluded.
- allUpdatesTimeMs() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
- alpha() - Method in class org.apache.spark.ml.recommendation.ALS
-
- alpha() - Method in interface org.apache.spark.ml.recommendation.ALSParams
-
Param for the alpha parameter in the implicit preference formulation (nonnegative).
- alpha() - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- ALS - Class in org.apache.spark.ml.recommendation
-
Alternating Least Squares (ALS) matrix factorization.
- ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS
-
- ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS
-
- ALS - Class in org.apache.spark.mllib.recommendation
-
Alternating Least Squares matrix factorization.
- ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
-
Constructs an ALS instance with default parameters: {numBlocks: -1, rank: 10, iterations: 10,
lambda: 0.01, implicitPrefs: false, alpha: 1.0}.
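A minimal sketch of overriding those defaults via the setters, assuming ratings is an existing RDD[Rating]:

    import org.apache.spark.mllib.recommendation.{ALS, Rating}
    val model = new ALS()
      .setRank(20)        // overrides the default rank of 10
      .setIterations(15)
      .setLambda(0.05)
      .run(ratings)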
- ALS.InBlock$ - Class in org.apache.spark.ml.recommendation
-
- ALS.LeastSquaresNESolver - Interface in org.apache.spark.ml.recommendation
-
Trait for least squares solvers applied to the normal equation.
- ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
-
Rating class for better code readability.
- ALS.Rating$ - Class in org.apache.spark.ml.recommendation
-
- ALS.RatingBlock$ - Class in org.apache.spark.ml.recommendation
-
- ALSModel - Class in org.apache.spark.ml.recommendation
-
Model fitted by ALS.
- ALSModelParams - Interface in org.apache.spark.ml.recommendation
-
Common params for ALS and ALSModel.
- ALSParams - Interface in org.apache.spark.ml.recommendation
-
Common params for ALS.
- alterAddColNotSupportDatasourceTableError(Object, TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterAddColNotSupportViewError(TableIdentifier) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterColumnCannotFindColumnInV1TableError(String, org.apache.spark.sql.connector.catalog.V1Table) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterColumnWithV1TableCannotSpecifyNotNullError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterDatabaseLocationUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterNamespace(String[], NamespaceChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
-
- alterNamespace(String[], NamespaceChange...) - Method in interface org.apache.spark.sql.connector.catalog.SupportsNamespaces
-
Apply a set of metadata changes to a namespace in the catalog.
- alterTable(Identifier, TableChange...) - Method in class org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension
-
- alterTable(Identifier, TableChange...) - Method in interface org.apache.spark.sql.connector.catalog.TableCatalog
-
Apply a set of changes to a table in the catalog.
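A hedged sketch of applying TableChanges through a TableCatalog (the catalog value and the db.events identifier are assumptions for illustration):

    import org.apache.spark.sql.connector.catalog.{Identifier, TableChange}
    import org.apache.spark.sql.types.StringType
    val ident = Identifier.of(Array("db"), "events")
    catalog.alterTable(ident,
      TableChange.addColumn(Array("note"), StringType),
      TableChange.setProperty("owner", "etl"))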
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
-
- alterTable(String, Seq<TableChange>, int) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Alter an existing table.
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- alterTable(String, Seq<TableChange>, int) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- alterTableChangeColumnNotSupportedForColumnTypeError(String, StructField, StructField) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableRecoverPartitionsNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableSerDePropertiesNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableSetSerdeForSpecificPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableSetSerdeNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterTableWithDropPartitionAndPurgeUnsupportedError() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- alterV2TableSetLocationWithPartitionNotSupportedError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- alterView(Identifier, ViewChange...) - Method in interface org.apache.spark.sql.connector.catalog.ViewCatalog
-
Apply changes to a view in the catalog.
- AlwaysFalse - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that always evaluates to false.
- AlwaysFalse() - Constructor for class org.apache.spark.sql.connector.expressions.filter.AlwaysFalse
-
- AlwaysFalse - Class in org.apache.spark.sql.sources
-
A filter that always evaluates to false.
- AlwaysFalse() - Constructor for class org.apache.spark.sql.sources.AlwaysFalse
-
- AlwaysTrue - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that always evaluates to true.
- AlwaysTrue() - Constructor for class org.apache.spark.sql.connector.expressions.filter.AlwaysTrue
-
- AlwaysTrue - Class in org.apache.spark.sql.sources
-
A filter that always evaluates to true.
- AlwaysTrue() - Constructor for class org.apache.spark.sql.sources.AlwaysTrue
-
- am() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
-
- ambiguousAttributesInSelfJoinError(Seq<AttributeReference>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousColumnOrFieldError(Seq<String>, int) - Method in interface org.apache.spark.sql.errors.CompilationErrors
-
- ambiguousColumnOrFieldError(Seq<String>, int, Origin) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
-
- ambiguousColumnOrFieldError(Seq<String>, int, Origin) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousLateralColumnAliasError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousLateralColumnAliasError(Seq<String>, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousReferenceError(String, Seq<Attribute>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousReferenceToFieldsError(String, int) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- ambiguousRelationAliasNameInNestedCTEError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- amount() - Method in class org.apache.spark.resource.ExecutorResourceRequest
-
- amount() - Method in class org.apache.spark.resource.ResourceRequest
-
- AMOUNT() - Static method in class org.apache.spark.resource.ResourceUtils
-
- amount() - Method in class org.apache.spark.resource.TaskResourceRequest
-
- AMOUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest
-
- AMOUNT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest
-
- AnalysisException - Exception in org.apache.spark.sql
-
Thrown when a query fails to analyze, usually because the query itself is invalid.
- AnalysisException(String, Map<String, String>, Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>, QueryContext[], String) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>, Origin) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- AnalysisException(String, Map<String, String>, Origin, Option<Throwable>) - Constructor for exception org.apache.spark.sql.AnalysisException
-
- analyzeTableNotSupportedForV2TablesError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- analyzeTableNotSupportedOnViewsError() - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- analyzingColumnStatisticsNotSupportedForColumnTypeError(String, DataType) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- and(Column) - Method in class org.apache.spark.sql.Column
-
Boolean AND.
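For example, assuming df has boolean-compatible columns:

    import org.apache.spark.sql.functions.col
    val active = df.filter(col("age").gt(18).and(col("verified")))
    // equivalent Scala operator form: df.filter(col("age") > 18 && col("verified"))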
- And - Class in org.apache.spark.sql.connector.expressions.filter
-
A predicate that evaluates to true iff both left and right evaluate to true.
- And(Predicate, Predicate) - Constructor for class org.apache.spark.sql.connector.expressions.filter.And
-
- And - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff both left and right evaluate to true.
- And(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.And
-
- ANOVATest - Class in org.apache.spark.ml.stat
-
ANOVA Test for continuous data.
- ANOVATest() - Constructor for class org.apache.spark.ml.stat.ANOVATest
-
- ansiDateTimeError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ansiDateTimeParseError(Exception) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ansiIllegalArgumentError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ansiIllegalArgumentError(IllegalArgumentException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- antecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
- ANY() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- any(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if at least one value of e is true.
- any_value(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns some value of e for a group of rows.
- any_value(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns some value of e for a group of rows.
- AnyDataType - Class in org.apache.spark.sql.types
-
An AbstractDataType that matches any concrete data types.
- AnyDataType() - Constructor for class org.apache.spark.sql.types.AnyDataType
-
- anyNull() - Method in interface org.apache.spark.sql.Row
-
Returns true if there are any NULL values in this row.
- anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarBatchRow
-
- anyNull() - Method in class org.apache.spark.sql.vectorized.ColumnarRow
-
- AnyTimestampType - Class in org.apache.spark.sql.types
-
- AnyTimestampType() - Constructor for class org.apache.spark.sql.types.AnyTimestampType
-
- AnyTimestampTypeExpression - Class in org.apache.spark.sql.types
-
- AnyTimestampTypeExpression() - Constructor for class org.apache.spark.sql.types.AnyTimestampTypeExpression
-
- ApiHelper - Class in org.apache.spark.ui.jobs
-
- ApiHelper() - Constructor for class org.apache.spark.ui.jobs.ApiHelper
-
- ApiRequestContext - Interface in org.apache.spark.status.api.v1
-
- APP_SPARK_VERSION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
- appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- append() - Method in class org.apache.spark.sql.DataFrameWriterV2
-
Append the contents of the data frame to the output table.
- Append() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which only the new rows in the streaming DataFrame/Dataset will be
written to the sink.
- appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Returns a new vector with 1.0 (bias) appended to the input vector.
- appendColumn(StructType, String, DataType, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- appendColumn(StructType, StructField) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- AppHistoryServerPlugin - Interface in org.apache.spark.status
-
An interface for creating history listeners (to replay event logs) defined in other modules, such as SQL, and for setting up the plugin's UI to rebuild the history UI.
- appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- appId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
-
- appId() - Method in class org.apache.spark.storage.ShuffleMergedDataBlockId
-
- appId() - Method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
-
- appId() - Method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
-
- APPLICATION_EXECUTOR_LIMIT() - Static method in class org.apache.spark.ui.ToolTips
-
- APPLICATION_MASTER() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
-
- applicationAttemptId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get the attempt ID for this run, if the cluster manager supports multiple
attempts.
- applicationAttemptId() - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Get an application's attempt ID associated with the job.
- applicationAttemptId() - Method in class org.apache.spark.SparkContext
-
- ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1
-
- applicationEndFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- applicationEndToJson(SparkListenerApplicationEnd, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- ApplicationEnvironmentInfo - Class in org.apache.spark.status.api.v1
-
- applicationId() - Method in interface org.apache.spark.scheduler.SchedulerBackend
-
Get an application ID associated with the job.
- applicationId() - Method in interface org.apache.spark.scheduler.TaskScheduler
-
Get an application ID associated with the job.
- applicationId() - Method in class org.apache.spark.SparkContext
-
A unique identifier for the Spark application.
- ApplicationInfo - Class in org.apache.spark.status.api.v1
-
- APPLICATIONS() - Static method in class org.apache.spark.metrics.MetricsSystemInstances
-
- applicationStartFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- applicationStartToJson(SparkListenerApplicationStart, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- ApplicationStatus - Enum in org.apache.spark.status.api.v1
-
- apply(T1) - Static method in class org.apache.spark.CleanAccum
-
- apply(T1) - Static method in class org.apache.spark.CleanBroadcast
-
- apply(T1) - Static method in class org.apache.spark.CleanCheckpoint
-
- apply(T1) - Static method in class org.apache.spark.CleanRDD
-
- apply(T1) - Static method in class org.apache.spark.CleanShuffle
-
- apply(T1) - Static method in class org.apache.spark.CleanSparkListener
-
- apply(T1, T2) - Static method in class org.apache.spark.ContextBarrierId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.ErrorInfo
-
- apply(int) - Static method in class org.apache.spark.ErrorMessageFormat
-
- apply(T1) - Static method in class org.apache.spark.ErrorSubInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.ExceptionFailure
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.ExecutorLostFailure
-
- apply(T1) - Static method in class org.apache.spark.ExecutorRegistered
-
- apply(T1) - Static method in class org.apache.spark.ExecutorRemoved
-
- apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.FetchFailed
-
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of vertices and
edges with attributes.
- apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from edges, setting referenced vertices to defaultVertexAttr.
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from vertices and edges, setting missing vertices to defaultVertexAttr.
- apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
- apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
-
Execute a Pregel-like iterative vertex-parallel abstraction.
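A classic sketch of this abstraction is single-source shortest paths; here graph is an assumed Graph[Double, Double] whose vertex attributes start at 0.0 for the source and Double.PositiveInfinity elsewhere:

    import org.apache.spark.graphx.Pregel
    val sssp = Pregel(graph, Double.PositiveInfinity)(
      (id, dist, newDist) => math.min(dist, newDist),   // vertex program
      triplet =>                                        // send messages along improving edges
        if (triplet.srcAttr + triplet.attr < triplet.dstAttr)
          Iterator((triplet.dstId, triplet.srcAttr + triplet.attr))
        else Iterator.empty,
      (a, b) => math.min(a, b))                         // merge incoming messages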
- apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an EdgeRDD) from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(DenseMatrix<Object>, DenseMatrix<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
-
- apply(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>, Function2<Object, Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
-
- apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its name.
- apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its index.
- apply(T1, T2) - Static method in class org.apache.spark.ml.clustering.ClusterData
-
- apply(T1, T2) - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- apply(int, int) - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- apply(int) - Method in class org.apache.spark.ml.linalg.DenseVector
-
- apply(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- apply(int) - Method in class org.apache.spark.ml.linalg.SparseVector
-
- apply(int) - Method in interface org.apache.spark.ml.linalg.Vector
-
Gets the value of the ith element.
- apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Gets the value of the input param or its default value if it does not exist.
- apply(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
Constructs the FamilyAndLink object from a parameter map.
- apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceEnd
-
- apply(T1) - Static method in class org.apache.spark.ml.SaveInstanceStart
-
- apply() - Static method in class org.apache.spark.ml.TransformEnd
-
- apply() - Static method in class org.apache.spark.ml.TransformStart
-
- apply(Split) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- apply(Row) - Method in class org.apache.spark.mllib.clustering.KMeansModel.Cluster$
-
- apply(BinaryConfusionMatrix) - Method in interface org.apache.spark.mllib.evaluation.binary.BinaryClassificationMetricComputer
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Precision
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Recall
-
- apply(T1) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- apply(T1, T2) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- apply(int) - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Gets the value of the ith element.
- apply(T1, T2, T3) - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- apply(T1, T2) - Static method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
-
- apply(T1, T2) - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- apply(int, Node) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
-
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData$
-
- apply(int, Node) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- apply(Predict) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
-
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData$
-
- apply(Predict) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- apply(Split) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
-
- apply(Row) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData$
-
- apply(Split) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- apply(Row) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.mllib.tree.model.Split
-
- apply(int) - Static method in class org.apache.spark.rdd.CheckpointState
-
- apply(int) - Static method in class org.apache.spark.rdd.DeterministicLevel
-
- apply(int) - Static method in class org.apache.spark.RequestMethod
-
- apply(T1, T2) - Static method in class org.apache.spark.resource.ResourceInformationJson
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- apply(String, long, Enumeration.Value, ByteBuffer, int, Map<String, ResourceInformation>) - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
Alternate factory method that takes a ByteBuffer directly for the data field.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.ExcludedExecutor
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.KillTask
-
- apply() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- apply() - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- apply(int) - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
Deprecated.
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklistedForStage
-
Deprecated.
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcluded
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorExcludedForStage
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
Deprecated.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnexcluded
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerLogStart
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerMiscellaneousProcessAdded
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
Deprecated.
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklistedForStage
-
Deprecated.
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcluded
-
- apply(T1, T2, T3, T4, T5) - Static method in class org.apache.spark.scheduler.SparkListenerNodeExcludedForStage
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
Deprecated.
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnexcluded
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerResourceProfileAdded
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.scheduler.SparkListenerStageExecutorMetrics
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- apply(T1) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetAdded
-
- apply(T1, T2) - Static method in class org.apache.spark.scheduler.SparkListenerUnschedulableTaskSetRemoved
-
- apply(int) - Static method in class org.apache.spark.scheduler.TaskLocality
-
- apply(Object) - Method in class org.apache.spark.sql.Column
-
Extracts a value or values from a complex type.
- apply(String, Expression...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a logical transform for applying a named transform.
- apply(String, Seq<Expression>) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
-
- apply(String) - Method in class org.apache.spark.sql.Dataset
-
Selects column based on the column name and returns it as a Column.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
Creates a Column for this UDAF using the given Columns as input arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
Creates a Column for this UDAF using the given Columns as input arguments.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
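For example, assuming df has an integer column n:

    import org.apache.spark.sql.functions.{col, udf}
    val plusOne = udf((x: Int) => x + 1)
    val out = df.select(plusOne(col("n")).as("n_plus_one"))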
- apply(T1, T2) - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- apply() - Static method in class org.apache.spark.sql.Observation
-
Observation constructor for creating an anonymous observation.
- apply(String) - Static method in class org.apache.spark.sql.Observation
-
Observation constructor for creating a named observation.
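A short sketch, assuming df is an existing DataFrame with a ts column:

    import org.apache.spark.sql.Observation
    import org.apache.spark.sql.functions.{col, count, lit, max}
    val obs = Observation("stats")
    val observed = df.observe(obs, count(lit(1)).as("rows"), max(col("ts")).as("latest"))
    observed.collect()      // metrics become available once an action completes
    val metrics = obs.get   // e.g. Map("rows" -> ..., "latest" -> ...)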
- apply(Dataset<Row>, Seq<Expression>, RelationalGroupedDataset.GroupType) - Static method in class org.apache.spark.sql.RelationalGroupedDataset
-
- apply(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.And
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.EqualTo
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.In
-
- apply(T1) - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- apply(T1) - Static method in class org.apache.spark.sql.sources.IsNull
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThan
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- apply(T1) - Static method in class org.apache.spark.sql.sources.Not
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.Or
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringContains
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- apply(T1, T2) - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- apply(String, Option<Object>, Map<String, String>) - Static method in class org.apache.spark.sql.streaming.SinkProgress
-
- apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
-
Construct an ArrayType object with the given element type.
- apply(T1) - Static method in class org.apache.spark.sql.types.CharType
-
- apply() - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
-
- apply(byte) - Static method in class org.apache.spark.sql.types.DayTimeIntervalType
-
- apply(double) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(long) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigInteger) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigInt) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(String) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
-
Construct a MapType object with the given key type and value type.
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.sql.types.StructField
-
- apply(String) - Method in class org.apache.spark.sql.types.StructType
-
- apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
-
Returns a StructType containing StructFields of the given names, preserving the original order of fields.
- apply(int) - Method in class org.apache.spark.sql.types.StructType
-
- apply(T1) - Static method in class org.apache.spark.sql.types.VarcharType
-
- apply() - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
-
- apply(byte) - Static method in class org.apache.spark.sql.types.YearMonthIntervalType
-
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- apply(T1, T2) - Static method in class org.apache.spark.status.api.v1.sql.Metric
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.status.api.v1.sql.Node
-
- apply(T1) - Static method in class org.apache.spark.status.api.v1.StackTrace
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
- apply(int) - Method in class org.apache.spark.status.RDDPartitionSeq
-
- apply(String) - Static method in class org.apache.spark.storage.BlockId
-
- apply(String, String, int, Option<String>) - Static method in class org.apache.spark.storage.BlockManagerId
-
- apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId
-
- apply(T1, T2) - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.CacheId
-
- apply(T1, T2) - Static method in class org.apache.spark.storage.RDDBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockBatchId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleBlockChunkId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleChecksumBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.storage.ShuffleMergedBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedDataBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedIndexBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShuffleMergedMetaBlockId
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.storage.ShufflePushBlockId
-
- apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object.
- apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object without setting useOffHeap.
- apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object from its integer representation.
- apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Read StorageLevel object from ObjectInput stream.
- apply(T1, T2) - Static method in class org.apache.spark.storage.StreamBlockId
-
- apply(T1) - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- apply(T1) - Static method in class org.apache.spark.streaming.Duration
-
- apply(long) - Static method in class org.apache.spark.streaming.Milliseconds
-
- apply(long) - Static method in class org.apache.spark.streaming.Minutes
-
- apply(T1, T2, T3, T4, T5, T6) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- apply(T1, T2, T3, T4, T5, T6, T7, T8) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- apply(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- apply(T1) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- apply(long) - Static method in class org.apache.spark.streaming.Seconds
-
- apply(T1, T2, T3) - Static method in class org.apache.spark.TaskCommitDenied
-
- apply(T1, T2, T3, T4) - Static method in class org.apache.spark.TaskKilled
-
- apply(int) - Static method in class org.apache.spark.TaskState
-
- apply(TraversableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values.
- apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values passed as variable-length arguments.
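For example:

    import org.apache.spark.util.StatCounter
    val stats = StatCounter(Seq(1.0, 2.0, 3.0, 4.0))
    // stats.mean == 2.5; stats.stdev, stats.variance, stats.max, etc. are also available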
- ApplyInPlace - Class in org.apache.spark.ml.ann
-
Implements in-place application of functions in the arrays.
- ApplyInPlace() - Constructor for class org.apache.spark.ml.ann.ApplyInPlace
-
- applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a map and return the result.
- applyNamespaceChanges(Map<String, String>, Seq<NamespaceChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a Java map and return the result.
- applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a map and return the result.
- applyPropertiesChanges(Map<String, String>, Seq<TableChange>) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply properties changes to a Java map and return the result.
- applySchema(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- applySchemaChanges(StructType, Seq<TableChange>, Option<String>, String) - Static method in class org.apache.spark.sql.connector.catalog.CatalogV2Util
-
Apply schema changes to a schema and return the result.
- appName() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- appName() - Method in class org.apache.spark.SparkContext
-
- appName(String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a name for the application, which will be shown in the Spark web UI.
- approx_count_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
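For example, assuming df has a user_id column; the second argument is the maximum allowed relative standard deviation:

    import org.apache.spark.sql.functions.{approx_count_distinct, col}
    val distinctUsers = df.agg(approx_count_distinct(col("user_id"), 0.05))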
- approx_percentile(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate percentile of the numeric column col, which is the smallest value in the ordered col values (sorted from least to greatest) such that no more than percentage of col values is less than or equal to that value.
- approxCountDistinct(Column) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(String) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(String, double) - Static method in class org.apache.spark.sql.functions
-
- ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- ApproximateEvaluator<U,R> - Interface in org.apache.spark.partial
-
An object that computes a function incrementally by merging in results of type U from multiple
tasks.
- approxQuantile(String, double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the approximate quantiles of a numerical column of a DataFrame.
- approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the approximate quantiles of numerical columns of a DataFrame.
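For example, computing the quartiles of a numeric column x with 1% relative error (a relativeError of 0.0 is exact but expensive):

    val Array(q1, median, q3) =
      df.stat.approxQuantile("x", Array(0.25, 0.5, 0.75), 0.01)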
- appSparkVersion() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- AppStatusUtils - Class in org.apache.spark.status
-
- AppStatusUtils() - Constructor for class org.apache.spark.status.AppStatusUtils
-
- archives() - Method in class org.apache.spark.SparkContext
-
- AreaUnderCurve - Class in org.apache.spark.mllib.evaluation
-
Computes the area under the curve (AUC) using the trapezoidal rule.
- AreaUnderCurve() - Constructor for class org.apache.spark.mllib.evaluation.AreaUnderCurve
-
- areaUnderPR() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the precision-recall curve.
- areaUnderROC() - Method in interface org.apache.spark.ml.classification.BinaryClassificationSummary
-
Computes the area under the receiver operating characteristic (ROC) curve.
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.ml.classification.FMClassificationSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.ml.classification.LinearSVCSummaryImpl
-
- areaUnderROC() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the receiver operating characteristic (ROC) curve.
- argmax() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- argmax() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- argmax() - Method in interface org.apache.spark.ml.linalg.Vector
-
Find the index of a maximal element.
- argmax() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- argmax() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- argmax() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Find the index of a maximal element.
- arguments() - Method in interface org.apache.spark.sql.connector.expressions.Transform
-
Returns the arguments passed to the transform function.
- arithmeticOverflowError(String, String, SQLQueryContext) - Method in interface org.apache.spark.sql.errors.ExecutionErrors
-
- arithmeticOverflowError(ArithmeticException) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- arithmeticOverflowError$default$2() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- arithmeticOverflowError$default$3() - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- ARPACK - Class in org.apache.spark.mllib.linalg
-
ARPACK routines for MLlib's vectors and matrices.
- ARPACK() - Constructor for class org.apache.spark.mllib.linalg.ARPACK
-
- array(DataType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type array.
- array(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array() - Method in class org.apache.spark.sql.vectorized.ColumnarArray
-
- array_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a list of objects with duplicates.
- array_append(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns an ARRAY containing all elements from the source ARRAY as well as the new element.
- array_compact(Column) - Static method in class org.apache.spark.sql.functions
-
Remove all null elements from the given array.
- array_contains(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns null if the array is null, true if the array contains value, and false otherwise.
- array_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Removes duplicate values from the array.
- array_except(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the first array but not in the second array,
without duplicates.
- array_insert(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Adds an item into a given array at a specified position.
- array_intersect(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the intersection of the given two arrays,
without duplicates.
- array_join(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Concatenates the elements of column using the delimiter.
- array_join(Column, String) - Static method in class org.apache.spark.sql.functions
-
Concatenates the elements of column using the delimiter.
- array_max(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the maximum value in the array.
- array_min(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the minimum value in the array.
- array_position(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Locates the position of the first occurrence of the value in the given array as long.
- array_prepend(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns an array containing value as well as all elements from array.
- array_remove(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Remove all elements that equal to element from the given array.
- array_repeat(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Creates an array containing the left argument repeated the number of times given by the
right argument.
- array_repeat(Column, int) - Static method in class org.apache.spark.sql.functions
-
Creates an array containing the left argument repeated the number of times given by the
right argument.
- array_size(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the total number of elements in the array.
- array_sort(Column) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array in ascending order.
- array_sort(Column, Function2<Column, Column, Column>) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array based on the given comparator function.
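For example, assuming df has a numeric array column xs (the descending comparator is one possible choice):

    import org.apache.spark.sql.functions.{array_sort, col}
    val ascending  = df.select(array_sort(col("xs")))
    val descending = df.select(array_sort(col("xs"), (l, r) => r - l))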
- array_to_vector(Column) - Static method in class org.apache.spark.ml.functions
-
Converts a column of array of numeric type into a column of dense vectors in MLlib.
- array_union(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns an array of the elements in the union of the given two arrays, without duplicates.
- arrayComponentTypeUnsupportedError(Class<?>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- arrayLengthGt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check that the array length is greater than lowerBound.
- arrays_overlap(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns true if a1 and a2 have at least one non-null element in common.
- arrays_zip(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns a merged array of structs in which the N-th struct contains all N-th values of input
arrays.
- arrays_zip(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns a merged array of structs in which the N-th struct contains all N-th values of input
arrays.
- ArrayType - Class in org.apache.spark.sql.types
-
- ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType
-
- arrayValues() - Method in class org.apache.spark.storage.memory.DeserializedValuesHolder
-
- ArrowColumnVector - Class in org.apache.spark.sql.vectorized
-
A column vector backed by Apache Arrow.
- ArrowColumnVector(ValueVector) - Constructor for class org.apache.spark.sql.vectorized.ArrowColumnVector
-
- ArrowUtils - Class in org.apache.spark.sql.util
-
- ArrowUtils() - Constructor for class org.apache.spark.sql.util.ArrowUtils
-
- as(Encoder<U>) - Method in class org.apache.spark.sql.Column
-
Provides a type hint about the expected return value of this column.
- as(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(Seq<String>) - Method in class org.apache.spark.sql.Column
-
(Scala-specific) Assigns the given aliases to the results of a table generating function.
- as(String[]) - Method in class org.apache.spark.sql.Column
-
Assigns the given aliases to the results of a table generating function.
- as(Symbol) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(String, Metadata) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias with metadata.
- as(Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset where each record has been mapped on to the specified type.
- as(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with an alias set.
- as(Symbol) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
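A brief sketch, assuming a SparkSession named spark and an existing DataFrame df (the Person case class is hypothetical):

    case class Person(name: String, age: Long)
    import spark.implicits._
    val people  = df.as[Person]   // typed Dataset[Person]
    val aliased = df.as("p")      // same data, aliased for disambiguation in joins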
- as(Encoder<K>, Encoder<T>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Returns a KeyValueGroupedDataset where the data is grouped by the grouping expressions of the current RelationalGroupedDataset.
- asBinary() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Convenient method for casting to binary logistic regression summary.
- asBinary() - Method in interface org.apache.spark.ml.classification.RandomForestClassificationSummary
-
Convenient method for casting to BinaryRandomForestClassificationSummary.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts the instance to a breeze vector.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the instance to a breeze vector.
- asc() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column.
- asc(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column.
- asc_nulls_first() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column,
and null values appear before non-null values.
- asc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column,
and null values appear before non-null values.
- asc_nulls_last() - Method in class org.apache.spark.sql.Column
-
Returns a sort expression based on ascending order of the column,
and null values appear after non-null values.
- asc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column,
and null values appear after non-null values.
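For example, given some DataFrame df with a hypothetical score column, the variants differ only in null placement:
    import org.apache.spark.sql.functions.{asc_nulls_last, col}
    df.orderBy(col("score").asc_nulls_first)   // nulls sort first
    df.orderBy(asc_nulls_last("score"))        // nulls sort last, via the functions API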
- asCaseSensitiveMap() - Method in class org.apache.spark.sql.util.CaseInsensitiveStringMap
-
Returns the original case-sensitive map.
- ascii(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the numeric value of the first character of the string column, and returns the
result as an int column.
- asFunctionCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
-
- asFunctionIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
-
- asFunctionIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
-
- asIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
-
- asin(Column) - Static method in class org.apache.spark.sql.functions
-
- asin(String) - Static method in class org.apache.spark.sql.functions
-
- asinh(Column) - Static method in class org.apache.spark.sql.functions
-
- asinh(String) - Static method in class org.apache.spark.sql.functions
-
- asInteraction() - Static method in class org.apache.spark.ml.feature.Dot
-
- asInteraction() - Method in interface org.apache.spark.ml.feature.InteractableTerm
-
Convert to ColumnInteraction to wrap all interactions.
- asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator.
- asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD
-
- asJavaRDD() - Method in class org.apache.spark.api.r.RRDD
-
- asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD
-
- ask(Object) - Method in interface org.apache.spark.api.plugin.PluginContext
-
Send an RPC to the plugin's driver-side component.
- asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator over key-value pairs.
- AskPermissionToCommitOutput - Class in org.apache.spark.scheduler
-
- AskPermissionToCommitOutput(int, int, int, int) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- askRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the default Spark timeout to use for RPC ask operations.
- askStandaloneSchedulerToShutDownExecutorsError(Exception) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
-
- askStorageEndpoints() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
-
- asML() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- asML() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- asML() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convert this matrix to the new mllib-local representation.
- asML() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- asML() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- asML() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Convert this vector to the new mllib-local representation.
- asMultipart() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.FunctionIdentifierHelper
-
- asMultipartIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
-
- asNamespaceCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
-
- asNondeterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction to nondeterministic.
- asNonNullable() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Updates UserDefinedFunction to non-nullable.
- asNullable() - Method in class org.apache.spark.sql.types.ObjectType
-
- asRDDId() - Method in class org.apache.spark.storage.BlockId
-
- asSchema() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.ColumnsHelper
-
- assert_true(Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if the condition is true, and throws an exception otherwise.
- assert_true(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns null if the condition is true; throws an exception with the error message otherwise.
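A brief sketch (df and the age column are assumptions); the query fails at runtime if any row violates the predicate:
    import org.apache.spark.sql.functions.{assert_true, col, lit}
    df.select(assert_true(col("age") >= 0, lit("negative age"))).collect()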
- assertExceptionMsg(Throwable, String, boolean, ClassTag<E>) - Static method in class org.apache.spark.TestUtils
-
Asserts that the exception message contains the given message.
- assertNotSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs
did not spill.
- assertSpilled(SparkContext, String, Function0<BoxedUnit>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs spilled.
- assignClusters(Dataset<?>) - Method in class org.apache.spark.ml.clustering.PowerIterationClustering
-
Run the PIC algorithm and return a cluster assignment for each input vertex.
- assignedAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Sequence of currently assigned resource addresses.
- Assignment(long, int) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
-
- Assignment$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
-
- assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
-
- AssociationRules - Class in org.apache.spark.ml.fpm
-
- AssociationRules() - Constructor for class org.apache.spark.ml.fpm.AssociationRules
-
- associationRules() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
Get association rules fitted using the minConfidence.
- AssociationRules - Class in org.apache.spark.mllib.fpm
-
Generates association rules from an RDD[FreqItemset[Item]].
- AssociationRules() - Constructor for class org.apache.spark.mllib.fpm.AssociationRules
-
Constructs a default instance with default parameters {minConfidence = 0.8}.
- AssociationRules.Rule<Item> - Class in org.apache.spark.mllib.fpm
-
An association rule between sets of items.
- asTableCatalog() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
-
- asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
-
- asTableIdentifier() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
-
- AsTableIdentifier() - Method in interface org.apache.spark.sql.connector.catalog.LookupCatalog
-
- AsTableIdentifier() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier
-
- AsTableIdentifier$() - Constructor for class org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier$
-
- asTerms() - Static method in class org.apache.spark.ml.feature.Dot
-
- asTerms() - Static method in class org.apache.spark.ml.feature.EmptyTerm
-
- asTerms() - Method in interface org.apache.spark.ml.feature.Term
-
Default representation of a single Term as a part of summed terms.
- asTransform() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper
-
- asTransforms() - Method in class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper
-
- AsyncEventQueue - Class in org.apache.spark.scheduler
-
An asynchronous queue for events.
- AsyncEventQueue(String, SparkConf, LiveListenerBusMetrics, LiveListenerBus) - Constructor for class org.apache.spark.scheduler.AsyncEventQueue
-
- AsyncRDDActions<T> - Class in org.apache.spark.rdd
-
A set of asynchronous RDD actions available through an implicit conversion.
- AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions
-
- atan(Column) - Static method in class org.apache.spark.sql.functions
-
- atan(String) - Static method in class org.apache.spark.sql.functions
-
- atan2(Column, Column) - Static method in class org.apache.spark.sql.functions
-
- atan2(Column, String) - Static method in class org.apache.spark.sql.functions
-
- atan2(String, Column) - Static method in class org.apache.spark.sql.functions
-
- atan2(String, String) - Static method in class org.apache.spark.sql.functions
-
- atan2(Column, double) - Static method in class org.apache.spark.sql.functions
-
- atan2(String, double) - Static method in class org.apache.spark.sql.functions
-
- atan2(double, Column) - Static method in class org.apache.spark.sql.functions
-
- atan2(double, String) - Static method in class org.apache.spark.sql.functions
-
- atanh(Column) - Static method in class org.apache.spark.sql.functions
-
- atanh(String) - Static method in class org.apache.spark.sql.functions
-
- attempt() - Method in class org.apache.spark.status.api.v1.TaskData
-
- ATTEMPT() - Static method in class org.apache.spark.status.TaskIndexNames
-
- ATTEMPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskData
-
- ATTEMPT_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper
-
- ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo
-
- ATTEMPT_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StageData
-
- attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- attemptId() - Method in interface org.apache.spark.status.api.v1.BaseAppResource
-
- attemptId() - Method in class org.apache.spark.status.api.v1.StageData
-
- attemptNumber() - Method in class org.apache.spark.BarrierTaskContext
-
- attemptNumber() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- attemptNumber() - Method in class org.apache.spark.scheduler.StageInfo
-
- attemptNumber() - Method in class org.apache.spark.scheduler.TaskInfo
-
- attemptNumber() - Method in class org.apache.spark.TaskCommitDenied
-
- attemptNumber() - Method in class org.apache.spark.TaskContext
-
How many times this task has been attempted.
- attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- ATTEMPTS_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo
-
- AtTimestamp(Date) - Constructor for class org.apache.spark.streaming.kinesis.KinesisInitialPositions.AtTimestamp
-
- attr() - Method in class org.apache.spark.graphx.Edge
-
- attr() - Method in class org.apache.spark.graphx.EdgeContext
-
The attribute associated with the edge.
- attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- Attribute - Class in org.apache.spark.ml.attribute
-
Abstract class for ML attributes.
- Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute
-
- attribute() - Method in class org.apache.spark.sql.sources.EqualNullSafe
-
- attribute() - Method in class org.apache.spark.sql.sources.EqualTo
-
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThan
-
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- attribute() - Method in class org.apache.spark.sql.sources.In
-
- attribute() - Method in class org.apache.spark.sql.sources.IsNotNull
-
- attribute() - Method in class org.apache.spark.sql.sources.IsNull
-
- attribute() - Method in class org.apache.spark.sql.sources.LessThan
-
- attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- attribute() - Method in class org.apache.spark.sql.sources.StringContains
-
- attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith
-
- attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith
-
- AttributeFactory - Interface in org.apache.spark.ml.attribute
-
Trait for ML attribute factories.
- AttributeGroup - Class in org.apache.spark.ml.attribute
-
Attributes that describe a vector ML column.
- AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group without attribute info.
- AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group knowing only the number of attributes.
- AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group with attributes.
- AttributeKeys - Class in org.apache.spark.ml.attribute
-
Keys used to store attributes.
- AttributeKeys() - Constructor for class org.apache.spark.ml.attribute.AttributeKeys
-
- attributeNameSyntaxError(String) - Static method in class org.apache.spark.sql.errors.DataTypeErrors
-
- attributeNameSyntaxError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Optional array of attributes.
- ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- attributes() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
-
- attributes() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
-
- attributes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- ATTRIBUTES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- attributesForTypeUnsupportedError(ScalaReflection.Schema) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- AttributeType - Class in org.apache.spark.ml.attribute
-
An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
- AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType
-
- attrType() - Method in class org.apache.spark.ml.attribute.Attribute
-
Attribute type.
- attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
- attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- available() - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- available() - Method in class org.apache.spark.io.ReadAheadInputStream
-
- available() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- availableAddrs() - Method in interface org.apache.spark.resource.ResourceAllocator
-
Sequence of currently available resource addresses.
- AvailableNow() - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger that processes all available data at the start of the query in one or multiple
batches, then terminates the query.
- Average() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- Avg - Class in org.apache.spark.sql.connector.expressions.aggregate
-
An aggregate function that returns the mean of all the values in a group.
- Avg(Expression, boolean) - Constructor for class org.apache.spark.sql.connector.expressions.aggregate.Avg
-
- avg(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Deprecated.
Average aggregate function.
- avg(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Deprecated.
Average aggregate function.
- avg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
- avg(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
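For instance, with a hypothetical df holding dept and salary columns, both spellings compute the same per-group mean:
    import org.apache.spark.sql.functions.{avg, col}
    df.groupBy("dept").avg("salary")                               // shorthand
    df.groupBy("dept").agg(avg(col("salary")).as("avg_salary"))    // explicit aggregate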
- avg() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the average of elements added to the accumulator.
- avg() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the average of elements added to the accumulator.
- avgEventRate() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- avgInputRate() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgLen() - Method in interface org.apache.spark.sql.connector.read.colstats.ColumnStatistics
-
- avgMetrics() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- avgProcessingTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgSchedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgTotalDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avroIncompatibleReadError(String, String, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- AvroMatchedField$() - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroMatchedField$
-
- AvroSchemaHelper(Schema, StructType, Seq<String>, Seq<String>, boolean) - Constructor for class org.apache.spark.sql.avro.AvroUtils.AvroSchemaHelper
-
- AvroUtils - Class in org.apache.spark.sql.avro
-
- AvroUtils() - Constructor for class org.apache.spark.sql.avro.AvroUtils
-
- AvroUtils.AvroMatchedField$ - Class in org.apache.spark.sql.avro
-
- AvroUtils.AvroSchemaHelper - Class in org.apache.spark.sql.avro
-
Helper class to perform field lookup/matching on Avro schemas.
- AvroUtils.RowReader - Interface in org.apache.spark.sql.avro
-
- awaitAnyTermination() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
- awaitAnyTermination(long) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since resetTerminated() was called.
- awaitReady(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.ready().
- awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.SparkThreadUtils
-
Preferred alternative to Await.result().
- awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.result().
- awaitResult(Future<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
- awaitTermination() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTermination(long) - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
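A hedged sketch, assuming a streaming DataFrame named streamDF:
    val query = streamDF.writeStream.format("console").start()
    // query.awaitTermination()                 // block until stop() or failure
    val done = query.awaitTermination(30000L)   // true if the query ended within 30s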
- awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Wait for the execution to stop.
- awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Wait for the execution to stop.
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y += a * x
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y += a * x
- BACKUP_STANDALONE_MASTER_PREFIX() - Static method in class org.apache.spark.util.Utils
-
An identifier that backup masters use in their responses.
- balanceSlack() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- barrier() - Method in class org.apache.spark.BarrierTaskContext
-
:: Experimental ::
Sets a global barrier and waits until all tasks in this stage hit this barrier.
- barrier() - Method in class org.apache.spark.rdd.RDD
-
:: Experimental ::
Marks the current stage as a barrier stage, where Spark must launch all tasks together.
- BARRIER() - Static method in class org.apache.spark.RequestMethod
-
- BARRIER_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode
-
- BarrierCoordinatorMessage - Interface in org.apache.spark
-
- barrierStageWithDynamicAllocationError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- barrierStageWithRDDChainPatternError() - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- BarrierTaskContext - Class in org.apache.spark
-
:: Experimental ::
A TaskContext with extra contextual info and tooling for tasks in a barrier stage.
- BarrierTaskInfo - Class in org.apache.spark
-
:: Experimental ::
Carries all task infos of a barrier task.
- base64(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the BASE64 encoding of a binary column and returns it as a string column.
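As a quick sketch (SparkSession spark assumed), a string is first encoded to binary, then rendered as BASE64:
    import org.apache.spark.sql.functions.{base64, col, encode}
    spark.sql("select 'spark' as s")
      .select(base64(encode(col("s"), "UTF-8")).as("b64"))
      .show()   // c3Bhcms=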
- BaseAppResource - Interface in org.apache.spark.status.api.v1
-
Base class for resource handlers that use app-specific data.
- baseOn(ParamPair<?>...) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- baseOn(ParamMap) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- baseOn(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Sets the given parameters in this grid to fixed values.
- BaseReadWrite - Interface in org.apache.spark.ml.util
-
Trait for MLWriter and MLReader.
- BaseRelation - Class in org.apache.spark.sql.sources
-
Represents a collection of tuples with a known schema.
- BaseRelation() - Constructor for class org.apache.spark.sql.sources.BaseRelation
-
- baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SparkSession
-
Convert a BaseRelation created for external data sources into a DataFrame.
- baseRelationToDataFrame(BaseRelation) - Method in class org.apache.spark.sql.SQLContext
-
- BaseRRDD<T,U> - Class in org.apache.spark.api.r
-
- BaseRRDD(RDD<T>, int, byte[], String, String, byte[], Broadcast<Object>[], ClassTag<T>, ClassTag<U>) - Constructor for class org.apache.spark.api.r.BaseRRDD
-
- BaseStreamingAppResource - Interface in org.apache.spark.status.api.v1.streaming
-
Base class for streaming API handlers, provides easy access to the streaming listener that
holds the app's information.
- BasicBlockReplicationPolicy - Class in org.apache.spark.storage
-
- BasicBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.BasicBlockReplicationPolicy
-
- basicCredentials(String, String) - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
Use a basic AWS keypair for long-lived authorization.
- basicSparkPage(HttpServletRequest, Function0<Seq<Node>>, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns a page with the spark css/js and a simple format.
- Batch - Interface in org.apache.spark.sql.connector.read
-
A physical representation of a data source scan for batch queries.
- BATCH_DURATION_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
- BATCH_ID_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress
-
- batchDuration() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- batchDuration() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- BATCHES() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
- batchId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- batchId() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- BatchInfo - Class in org.apache.spark.status.api.v1.streaming
-
- BatchInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
Class having information on completed batches.
- BatchInfo(Time, Map<Object, StreamInputInfo>, long, Option<Object>, Option<Object>, Map<Object, OutputOperationInfo>) - Constructor for class org.apache.spark.streaming.scheduler.BatchInfo
-
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- batchInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- batchInfos() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
-
- batchMetadataFileNotFoundError(Path) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- BatchStatus - Enum in org.apache.spark.status.api.v1.streaming
-
- batchTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- batchTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- batchTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- BatchWrite - Interface in org.apache.spark.sql.connector.write
-
An interface that defines how to write data to a data source for batch processing.
- batchWriteCapabilityError(Table, String, String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- bbos() - Method in class org.apache.spark.storage.memory.SerializedValuesHolder
-
- bean(Class<T>) - Static method in class org.apache.spark.sql.Encoders
-
Creates an encoder for Java Bean of type T.
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.H2Dialect
-
- beforeFetch(Connection, Map<String, String>) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Override connection specific properties to run before a select is made.
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- beforeFetch(Connection, Map<String, String>) - Static method in class org.apache.spark.sql.jdbc.TeradataDialect
-
- BernoulliCellSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi ::
A sampler based on Bernoulli trials for partitioning a data sequence.
- BernoulliCellSampler(double, double, boolean) - Constructor for class org.apache.spark.util.random.BernoulliCellSampler
-
- BernoulliSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi ::
A sampler based on Bernoulli trials.
- BernoulliSampler(double, ClassTag<T>) - Constructor for class org.apache.spark.util.random.BernoulliSampler
-
- bestModel() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- bestModel() - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- beta() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
The beta value, which controls precision vs recall weighting, used in "weightedFMeasure", "fMeasureByLabel".
- beta() - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- between(Object, Object) - Method in class org.apache.spark.sql.Column
-
True if the current column is between the lower bound and upper bound, inclusive.
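For example (df and the age column are made up), bounds are inclusive on both ends:
    import org.apache.spark.sql.functions.col
    df.where(col("age").between(18, 65))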
- bin(Column) - Static method in class org.apache.spark.sql.functions
-
An expression that returns the string representation of the binary value of the given long
column.
- bin(String) - Static method in class org.apache.spark.sql.functions
-
An expression that returns the string representation of the binary value of the given long
column.
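A small sketch assuming a SparkSession named spark:
    import org.apache.spark.sql.functions.{bin, col}
    spark.range(1, 4).select(bin(col("id"))).show()   // "1", "10", "11"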
- Binarizer - Class in org.apache.spark.ml.feature
-
Binarize a column of continuous features given a threshold.
- Binarizer(String) - Constructor for class org.apache.spark.ml.feature.Binarizer
-
- Binarizer() - Constructor for class org.apache.spark.ml.feature.Binarizer
-
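A minimal usage sketch of Binarizer; the column names and threshold are illustrative assumptions:
    import org.apache.spark.ml.feature.Binarizer
    val binarizer = new Binarizer()
      .setInputCol("score")      // assumed input column
      .setOutputCol("label")
      .setThreshold(0.5)         // values above 0.5 become 1.0, the rest 0.0
    val out = binarizer.transform(df)   // df: some DataFrame with a numeric "score" column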
- Binary() - Static method in class org.apache.spark.ml.attribute.AttributeType
-
Binary type.
- binary() - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- binary() - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- binary() - Method in interface org.apache.spark.ml.feature.CountVectorizerParams
-
Binary toggle to control the output vector values.
- binary() - Method in class org.apache.spark.ml.feature.HashingTF
-
Binary toggle to control term frequency counts.
- binary() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type binary.
- BINARY() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for arrays of bytes.
- binaryArithmeticCauseOverflowError(short, String, short) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- BinaryAttribute - Class in org.apache.spark.ml.attribute
-
A binary attribute.
- BinaryClassificationEvaluator - Class in org.apache.spark.ml.evaluation
-
Evaluator for binary classification, which expects input columns rawPrediction, label and
an optional weight column.
- BinaryClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- BinaryClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- BinaryClassificationMetricComputer - Interface in org.apache.spark.mllib.evaluation.binary
-
Trait for a binary classification evaluation metric computer.
- BinaryClassificationMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for binary classification.
- BinaryClassificationMetrics(RDD<? extends Product>, int) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
- BinaryClassificationMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Defaults numBins to 0.
- BinaryClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary classification results for a given model.
- binaryColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
-
- BinaryConfusionMatrix - Interface in org.apache.spark.mllib.evaluation.binary
-
Trait for a binary confusion matrix.
- binaryFiles(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of binary files from HDFS, a local file system (available on all nodes),
or any Hadoop-supported file system URI as a byte array.
- binaryFiles(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a directory of binary files from HDFS, a local file system (available on all nodes),
or any Hadoop-supported file system URI as a byte array.
- binaryFiles(String, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop-readable dataset as PortableDataStream for each file (useful for binary data).
- binaryLabelValidator() - Static method in class org.apache.spark.mllib.util.DataValidators
-
Function to check if labels used for classification are either zero or one.
- BinaryLogisticRegressionSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary logistic regression results for a given model.
- BinaryLogisticRegressionSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary logistic regression results for a given model.
- BinaryLogisticRegressionSummaryImpl(Dataset<Row>, String, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionSummaryImpl
-
- BinaryLogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for binary logistic regression training results.
- BinaryLogisticRegressionTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary logistic regression training results.
- BinaryLogisticRegressionTrainingSummaryImpl(Dataset<Row>, String, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummaryImpl
-
- BinaryRandomForestClassificationSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for BinaryRandomForestClassification results for a given model.
- BinaryRandomForestClassificationSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary RandomForestClassification results for a given model.
- BinaryRandomForestClassificationSummaryImpl(Dataset<Row>, String, String, String, String) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationSummaryImpl
-
- BinaryRandomForestClassificationTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for BinaryRandomForestClassification training results.
- BinaryRandomForestClassificationTrainingSummaryImpl - Class in org.apache.spark.ml.classification
-
Binary RandomForestClassification training results.
- BinaryRandomForestClassificationTrainingSummaryImpl(Dataset<Row>, String, String, String, String, double[]) - Constructor for class org.apache.spark.ml.classification.BinaryRandomForestClassificationTrainingSummaryImpl
-
- binaryRecords(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Load data from a flat binary file, assuming the length of each record is constant.
- binaryRecords(String, int, Configuration) - Method in class org.apache.spark.SparkContext
-
Load data from a flat binary file, assuming the length of each record is constant.
- binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Deprecated.
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them as flat binary files with fixed record lengths,
yielding byte arrays
- binaryRecordsStream(String, int) - Method in class org.apache.spark.streaming.StreamingContext
-
Deprecated.
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them as flat binary files, assuming a fixed length per record,
generating one byte array per record.
- BinarySample - Class in org.apache.spark.mllib.stat.test
-
Class that represents the group and value of a sample.
- BinarySample(boolean, double) - Constructor for class org.apache.spark.mllib.stat.test.BinarySample
-
- binarySummary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Gets summary of model on training set.
- binarySummary() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
Gets summary of model on training set.
- BinaryType - Class in org.apache.spark.sql.types
-
The data type representing Array[Byte] values.
- BinaryType() - Constructor for class org.apache.spark.sql.types.BinaryType
-
- BinaryType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the BinaryType object.
- bind(StructType) - Method in interface org.apache.spark.sql.connector.catalog.functions.UnboundFunction
-
Bind this function to an input type.
- Binomial$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- BinomialBounds - Class in org.apache.spark.util.random
-
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact
sample size with high confidence when sampling without replacement.
- BinomialBounds() - Constructor for class org.apache.spark.util.random.BinomialBounds
-
- bins() - Method in interface org.apache.spark.sql.connector.read.colstats.Histogram
-
- BisectingKMeans - Class in org.apache.spark.ml.clustering
-
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques"
by Steinbach, Karypis, and Kumar, with modification to fit Spark.
- BisectingKMeans(String) - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
-
- BisectingKMeans() - Constructor for class org.apache.spark.ml.clustering.BisectingKMeans
-
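A short fitting sketch for the ML version of BisectingKMeans; featuresDF is a hypothetical DataFrame with a "features" vector column:
    import org.apache.spark.ml.clustering.BisectingKMeans
    val bkm = new BisectingKMeans().setK(4).setSeed(1L)
    val model = bkm.fit(featuresDF)
    val assigned = model.transform(featuresDF)   // adds a "prediction" column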
- BisectingKMeans - Class in org.apache.spark.mllib.clustering
-
A bisecting k-means algorithm based on the paper "A comparison of document clustering techniques"
by Steinbach, Karypis, and Kumar, with modification to fit Spark.
- BisectingKMeans() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeans
-
Constructs with the default configuration.
- BisectingKMeansModel - Class in org.apache.spark.ml.clustering
-
Model fitted by BisectingKMeans.
- BisectingKMeansModel - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansModel(ClusteringTreeNode) - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
- BisectingKMeansModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansModel.SaveLoadV2_0$ - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansModel.SaveLoadV3_0$ - Class in org.apache.spark.mllib.clustering
-
- BisectingKMeansParams - Interface in org.apache.spark.ml.clustering
-
Common params for BisectingKMeans and BisectingKMeansModel
- BisectingKMeansSummary - Class in org.apache.spark.ml.clustering
-
Summary of BisectingKMeans.
- bit_and(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the bitwise AND of all non-null input values, or null if none.
- bit_count(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of bits that are set in the argument expr as an unsigned 64-bit integer,
or NULL if the argument is NULL.
- bit_get(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the bit (0 or 1) at the specified position.
- bit_length(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the bit length for the specified string column.
- bit_or(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the bitwise OR of all non-null input values, or null if none.
- bit_xor(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the bitwise XOR of all non-null input values, or null if none.
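For example (df and its flags column are assumptions), the three bitwise aggregates can be computed together:
    import org.apache.spark.sql.functions.{bit_and, bit_or, bit_xor, col}
    df.agg(bit_and(col("flags")), bit_or(col("flags")), bit_xor(col("flags")))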
- bitmap_bit_position(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the bit position for the given input column.
- bitmap_bucket_number(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the bucket number for the given input column.
- bitmap_construct_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a bitmap with the positions of the bits set from all the values from the input column.
- bitmap_count(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of set bits in the input bitmap.
- bitmap_or_agg(Column) - Static method in class org.apache.spark.sql.functions
-
Returns a bitmap that is the bitwise OR of all of the bitmaps from the input column.
- bitSize() - Method in class org.apache.spark.util.sketch.BloomFilter
-
Returns the number of bits in the underlying bit array.
- bitwise_not(Column) - Static method in class org.apache.spark.sql.functions
-
Computes bitwise NOT (~) of a number.
- bitwiseAND(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise AND of this expression with another expression.
- bitwiseNOT(Column) - Static method in class org.apache.spark.sql.functions
-
- bitwiseOR(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise OR of this expression with another expression.
- bitwiseXOR(Object) - Method in class org.apache.spark.sql.Column
-
Compute bitwise XOR of this expression with another expression.
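For illustration, with hypothetical integer columns a and b on some DataFrame df:
    import org.apache.spark.sql.functions.col
    df.select(col("a").bitwiseAND(col("b")), col("a").bitwiseOR(col("b")), col("a").bitwiseXOR(col("b")))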
- BLACKLISTED_IN_STAGES_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary
-
- blacklistedInStages() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- BLAS - Class in org.apache.spark.ml.linalg
-
BLAS routines for MLlib's vectors and matrices.
- BLAS() - Constructor for class org.apache.spark.ml.linalg.BLAS
-
- BLAS - Class in org.apache.spark.mllib.linalg
-
BLAS routines for MLlib's vectors and matrices.
- BLAS() - Constructor for class org.apache.spark.mllib.linalg.BLAS
-
- BLOCK_NAME_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo
-
- BlockData - Interface in org.apache.spark.storage
-
Abstracts away how blocks are stored and provides different ways to read the underlying block
data.
- blockDoesNotExistError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- blockedByLock() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
- blockedByThreadId() - Method in class org.apache.spark.status.api.v1.ThreadStackTrace
-
- BlockEvictionHandler - Interface in org.apache.spark.storage.memory
-
- BlockGeneratorListener - Interface in org.apache.spark.streaming.receiver
-
Listener object for BlockGenerator events.
- blockHaveBeenRemovedError(String) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- BlockId - Class in org.apache.spark.storage
-
:: DeveloperApi ::
Identifies a particular Block of data, usually associated with a single file.
- BlockId() - Constructor for class org.apache.spark.storage.BlockId
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocations
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetRDDBlockVisibility
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.MarkRDDBlockAsVisible
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- blockId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateRDDBlockTaskInfo
-
- blockId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- blockId() - Method in interface org.apache.spark.streaming.receiver.ReceivedBlockStoreResult
-
- blockIds() - Method in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
-
- BlockInfoWrapper - Class in org.apache.spark.storage
-
- BlockInfoWrapper(BlockInfo, Lock, Condition) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
-
- BlockInfoWrapper(BlockInfo, Lock) - Constructor for class org.apache.spark.storage.BlockInfoWrapper
-
- BlockLocationsAndStatus(Seq<BlockManagerId>, BlockStatus, Option<String[]>) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
-
- BlockLocationsAndStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus$
-
- blockManager() - Method in class org.apache.spark.SparkEnv
-
- blockManagerAddedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockManagerAddedToJson(SparkListenerBlockManagerAdded, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- BlockManagerHeartbeat(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
-
- BlockManagerHeartbeat$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
-
- blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- blockManagerId() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- BlockManagerId - Class in org.apache.spark.storage
-
:: DeveloperApi ::
This class represents a unique identifier for a BlockManager.
- BlockManagerId() - Constructor for class org.apache.spark.storage.BlockManagerId
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetPeers
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.GetReplicateInfoForRDDBlocks
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- blockManagerId() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- blockManagerIdCache() - Static method in class org.apache.spark.storage.BlockManagerId
-
The max cache size is hardcoded to 10000; since the size of a BlockManagerId object is about 48B, the total memory cost should be below 1MB, which is feasible.
- blockManagerIdFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockManagerIdToJson(BlockManagerId, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- BlockManagerMessages - Class in org.apache.spark.storage
-
- BlockManagerMessages() - Constructor for class org.apache.spark.storage.BlockManagerMessages
-
- BlockManagerMessages.BlockLocationsAndStatus - Class in org.apache.spark.storage
-
The response message of a GetLocationsAndStatus request.
- BlockManagerMessages.BlockLocationsAndStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.BlockManagerHeartbeat - Class in org.apache.spark.storage
-
- BlockManagerMessages.BlockManagerHeartbeat$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.DecommissionBlockManager$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.DecommissionBlockManagers - Class in org.apache.spark.storage
-
- BlockManagerMessages.DecommissionBlockManagers$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetBlockStatus - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetBlockStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetExecutorEndpointRef - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetExecutorEndpointRef$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocations - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocations$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsAndStatus - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsAndStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsMultipleBlockIds - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetLocationsMultipleBlockIds$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetMatchingBlockIds - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetMatchingBlockIds$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetMemoryStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetPeers - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetPeers$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetRDDBlockVisibility - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetRDDBlockVisibility$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetReplicateInfoForRDDBlocks - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetReplicateInfoForRDDBlocks$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetShufflePushMergerLocations - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetShufflePushMergerLocations$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.GetStorageStatus$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.IsExecutorAlive - Class in org.apache.spark.storage
-
- BlockManagerMessages.IsExecutorAlive$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.MarkRDDBlockAsVisible - Class in org.apache.spark.storage
-
- BlockManagerMessages.MarkRDDBlockAsVisible$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RegisterBlockManager - Class in org.apache.spark.storage
-
- BlockManagerMessages.RegisterBlockManager$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBlock - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBlock$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBroadcast - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveBroadcast$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveExecutor - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveExecutor$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveRdd - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveRdd$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShuffle - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShuffle$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShufflePushMergerLocation - Class in org.apache.spark.storage
-
- BlockManagerMessages.RemoveShufflePushMergerLocation$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.ReplicateBlock - Class in org.apache.spark.storage
-
- BlockManagerMessages.ReplicateBlock$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.StopBlockManagerMaster$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.ToBlockManagerMaster - Interface in org.apache.spark.storage
-
- BlockManagerMessages.ToBlockManagerMasterStorageEndpoint - Interface in org.apache.spark.storage
-
- BlockManagerMessages.TriggerHeapHistogram$ - Class in org.apache.spark.storage
-
Driver to Executor message to get a heap histogram.
- BlockManagerMessages.TriggerThreadDump$ - Class in org.apache.spark.storage
-
Driver to Executor message to trigger a thread dump.
- BlockManagerMessages.UpdateBlockInfo - Class in org.apache.spark.storage
-
- BlockManagerMessages.UpdateBlockInfo$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.UpdateRDDBlockTaskInfo - Class in org.apache.spark.storage
-
- BlockManagerMessages.UpdateRDDBlockTaskInfo$ - Class in org.apache.spark.storage
-
- BlockManagerMessages.UpdateRDDBlockVisibility - Class in org.apache.spark.storage
-
- BlockManagerMessages.UpdateRDDBlockVisibility$ - Class in org.apache.spark.storage
-
- blockManagerRemovedFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockManagerRemovedToJson(SparkListenerBlockManagerRemoved, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- BlockMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a distributed matrix in blocks of local matrices.
- BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- BlockMatrix(RDD<Tuple2<Tuple2<Object, Object>, Matrix>>, int, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Alternate constructor for BlockMatrix without the input of the number of rows and columns.
- blockName() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
-
- blockName() - Method in class org.apache.spark.status.LiveRDDPartition
-
- blockNotFoundError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- BlockNotFoundException - Exception in org.apache.spark.storage
-
- BlockNotFoundException(String) - Constructor for exception org.apache.spark.storage.BlockNotFoundException
-
- BlockReplicationPolicy - Interface in org.apache.spark.storage
-
::DeveloperApi::
BlockReplicationPrioritization provides logic for prioritizing a sequence of peers for
replicating blocks.
- BlockReplicationUtils - Class in org.apache.spark.storage
-
- BlockReplicationUtils() - Constructor for class org.apache.spark.storage.BlockReplicationUtils
-
- blocks() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- blockSize() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- blockSize() - Method in interface org.apache.spark.ml.param.shared.HasBlockSize
-
Param for block size for stacking input data in matrices.
- blockSize() - Method in class org.apache.spark.ml.recommendation.ALS
-
- blockSize() - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- BlockStatus - Class in org.apache.spark.storage
-
- BlockStatus(StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockStatus
-
- blockStatusFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockStatusQueryReturnedNullError(BlockId) - Static method in class org.apache.spark.errors.SparkCoreErrors
-
- blockStatusToJson(BlockStatus, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdatedInfo() - Method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- BlockUpdatedInfo - Class in org.apache.spark.storage
-
:: DeveloperApi ::
Stores information about a block status in a block manager.
- BlockUpdatedInfo(BlockManagerId, BlockId, StorageLevel, long, long) - Constructor for class org.apache.spark.storage.BlockUpdatedInfo
-
- blockUpdatedInfoFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdatedInfoToJson(BlockUpdatedInfo, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdateFromJson(JsonNode) - Static method in class org.apache.spark.util.JsonProtocol
-
- blockUpdateToJson(SparkListenerBlockUpdated, JsonGenerator) - Static method in class org.apache.spark.util.JsonProtocol
-
- bloomFilter(String, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(Column, long, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(String, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
- bloomFilter(Column, long, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Bloom filter over a specified column.
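A hedged sketch; the column name and sizing numbers are made up:
    // ~1M expected distinct values, 3% target false-positive rate
    val bf = df.stat.bloomFilter("user_id", 1000000L, 0.03)
    bf.mightContain(42L)   // true may be a false positive; false is definitive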
- BloomFilter - Class in org.apache.spark.util.sketch
-
A Bloom filter is a space-efficient probabilistic data structure that offers an approximate
containment test with one-sided error: if it claims that an item is contained in it, this
might be in error, but if it claims that an item is not contained in it, then this is
definitely true.
- BloomFilter() - Constructor for class org.apache.spark.util.sketch.BloomFilter
-
- BloomFilter.Version - Enum in org.apache.spark.util.sketch
-
- bmAddress() - Method in class org.apache.spark.FetchFailed
-
- bool_and(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if all values of e are true.
- bool_or(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns true if at least one value of e is true.
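A short Java illustration of both aggregates; df and its dept, passed, and flagged columns are hypothetical:

    import static org.apache.spark.sql.functions.*;

    // bool_and: true only if every "passed" in the group is true.
    // bool_or:  true if any "flagged" in the group is true.
    df.groupBy("dept")
      .agg(bool_and(col("passed")), bool_or(col("flagged")))
      .show();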
- BOOLEAN() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable boolean type.
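A minimal sketch of using this encoder; spark is a hypothetical SparkSession:

    import java.util.Arrays;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;

    // Nullable: the encoder accepts null elements.
    Dataset<Boolean> flags =
        spark.createDataset(Arrays.asList(true, false, null), Encoders.BOOLEAN());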
- booleanColumn(String[]) - Static method in class org.apache.parquet.filter2.predicate.SparkFilterApi
-
- BooleanParam - Class in org.apache.spark.ml.param
-
Specialized version of Param[Boolean] for Java.
- BooleanParam(String, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
-
- BooleanParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.BooleanParam
-
- BooleanType - Class in org.apache.spark.sql.types
-
The data type representing Boolean values.
- BooleanType() - Constructor for class org.apache.spark.sql.types.BooleanType
-
- BooleanType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the BooleanType object.
- BooleanTypeExpression - Class in org.apache.spark.sql.types
-
- BooleanTypeExpression() - Constructor for class org.apache.spark.sql.types.BooleanTypeExpression
-
- boost(RDD<org.apache.spark.ml.feature.Instance>, RDD<org.apache.spark.ml.feature.Instance>, BoostingStrategy, boolean, long, String, Option<org.apache.spark.ml.util.Instrumentation>) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Internal method for performing regression using trees as base learners.
- BoostingStrategy - Class in org.apache.spark.mllib.tree.configuration
-
- BoostingStrategy(Strategy, Loss, int, double, double) - Constructor for class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- bootstrap() - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- bootstrap() - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- bootstrap() - Method in interface org.apache.spark.ml.tree.RandomForestParams
-
Whether bootstrap samples are used when building trees.
- Both() - Static method in class org.apache.spark.graphx.EdgeDirection
-
Edges originating from *and* arriving at a vertex of interest.
- boundaries() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
Boundaries in increasing order for which predictions are known.
- boundaries() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
- BoundedDouble - Class in org.apache.spark.partial
-
A Double value with error bars and associated confidence.
- BoundedDouble(double, double, double, double) - Constructor for class org.apache.spark.partial.BoundedDouble
-
- BoundFunction - Interface in org.apache.spark.sql.connector.catalog.functions
-
Represents a function that is bound to an input type.
- BreezeUtil - Class in org.apache.spark.ml.ann
-
In-place DGEMM and DGEMV for Breeze
- BreezeUtil() - Constructor for class org.apache.spark.ml.ann.BreezeUtil
-
- broadcast(T) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
- Broadcast<T> - Class in org.apache.spark.broadcast
-
A broadcast variable.
- Broadcast(long, ClassTag<T>) - Constructor for class org.apache.spark.broadcast.Broadcast
-
- broadcast(T, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
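A minimal Java sketch of a broadcast variable; jsc is a hypothetical JavaSparkContext:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.spark.broadcast.Broadcast;

    // The list is shipped to each executor once and read via value(),
    // rather than being captured and serialized with every task.
    List<String> allowed = Arrays.asList("a", "b", "c");
    Broadcast<List<String>> bc = jsc.broadcast(allowed);
    long n = jsc.parallelize(Arrays.asList("a", "x", "c"))
                .filter(s -> bc.value().contains(s))
                .count();   // 2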
- broadcast(Dataset<T>) - Static method in class org.apache.spark.sql.functions
-
Marks a DataFrame as small enough for use in broadcast joins.
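A hedged sketch of the broadcast join hint; large and small are hypothetical Datasets sharing an id column:

    import static org.apache.spark.sql.functions.broadcast;

    // Hinting that "small" fits in executor memory steers the planner
    // toward a broadcast hash join instead of a shuffle join.
    Dataset<Row> joined = large.join(broadcast(small), "id");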
- BROADCAST() - Static method in class org.apache.spark.storage.BlockId
-
- BroadcastBlockId - Class in org.apache.spark.storage
-
- BroadcastBlockId(long, String) - Constructor for class org.apache.spark.storage.BroadcastBlockId
-
- broadcastCleaned(long) - Method in interface org.apache.spark.CleanerListener
-
- BroadcastFactory - Interface in org.apache.spark.broadcast
-
An interface for all the broadcast implementations in Spark (to allow
multiple broadcast implementations).
- broadcastId() - Method in class org.apache.spark.CleanBroadcast
-
- broadcastId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
-
- broadcastId() - Method in class org.apache.spark.storage.BroadcastBlockId
-
- broadcastManager() - Method in class org.apache.spark.SparkEnv
-
- bround(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the column e rounded to 0 decimal places with HALF_EVEN round mode.
- bround(Column, int) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_EVEN round mode if scale is greater than or equal to 0, or at integral part when scale is less than 0.
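A short Java illustration; df and its x column are hypothetical:

    import static org.apache.spark.sql.functions.*;

    // HALF_EVEN ("banker's") rounding: ties go to the nearest even digit,
    // so bround of 2.5 yields 2.0 while bround of 3.5 yields 4.0.
    df.select(bround(col("x")), bround(col("x"), 2)).show();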
- btrim(Column) - Static method in class org.apache.spark.sql.functions
-
Removes the leading and trailing space characters from str.
- btrim(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Remove the leading and trailing trim characters from str.
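A minimal sketch of both overloads; df and its s column are hypothetical:

    import static org.apache.spark.sql.functions.*;

    // btrim(col)       -> strip leading/trailing spaces
    // btrim(col, trim) -> strip any leading/trailing characters found in "xy"
    df.select(btrim(col("s")), btrim(col("s"), lit("xy"))).show();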
- bucket(int, String...) - Static method in class org.apache.spark.sql.connector.expressions.Expressions
-
Create a bucket transform for one or more columns.
- bucket(int, NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
-
- bucket(int, NamedReference[], NamedReference[]) - Static method in class org.apache.spark.sql.connector.expressions.LogicalExpressions
-
- bucket(Column, Column) - Static method in class org.apache.spark.sql.functions
-
A transform for any type that partitions by a hash of the input column.
- bucket(int, Column) - Static method in class org.apache.spark.sql.functions
-
A transform for any type that partitions by a hash of the input column.
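A hedged sketch of the bucket transform with the v2 writer API; df and the table name are hypothetical:

    import static org.apache.spark.sql.functions.*;

    // Partition a v2 catalog table by a hash of "id" into 16 buckets.
    df.writeTo("catalog.db.events")
      .partitionedBy(bucket(16, col("id")))
      .create();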
- bucketBy(int, String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Buckets the output by the given columns.
- bucketBy(int, String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Buckets the output by the given columns.
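A minimal Java sketch; events is a hypothetical Dataset<Row>. Rows with the same user_id hash land in the same bucket file, which lets Spark avoid a shuffle when later joining or aggregating on user_id:

    // Bucketed, sorted output; bucketBy requires saveAsTable.
    events.write()
          .bucketBy(8, "user_id")
          .sortBy("user_id")
          .saveAsTable("events_bucketed");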
- bucketByAndSortByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- bucketByUnsupportedByOperationError(String) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- BucketedRandomProjectionLSH - Class in org.apache.spark.ml.feature
-
- BucketedRandomProjectionLSH(String) - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- BucketedRandomProjectionLSH() - Constructor for class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- BucketedRandomProjectionLSHModel - Class in org.apache.spark.ml.feature
-
- BucketedRandomProjectionLSHParams - Interface in org.apache.spark.ml.feature
-
- bucketingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- Bucketizer - Class in org.apache.spark.ml.feature
-
Bucketizer maps a column of continuous features to a column of feature buckets.
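A minimal Java sketch; df and its hour column are hypothetical:

    import org.apache.spark.ml.feature.Bucketizer;

    // Splits define half-open bins [0,6), [6,18) and a closed last bin
    // [18,24]; each "hour" value is replaced by its bin index.
    Bucketizer bucketizer = new Bucketizer()
        .setInputCol("hour")
        .setOutputCol("hourBucket")
        .setSplits(new double[]{0.0, 6.0, 18.0, 24.0});
    Dataset<Row> bucketed = bucketizer.transform(df);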
- Bucketizer(String) - Constructor for class org.apache.spark.ml.feature.Bucketizer
-
- Bucketizer() - Constructor for class org.apache.spark.ml.feature.Bucketizer
-
- bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- bucketLength() - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- bucketLength() - Method in interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams
-
The length of each hash bucket; a larger bucket lowers the false negative rate.
- bucketSortingColumnCannotBePartOfPartitionColumnsError(String, Seq<String>) - Static method in class org.apache.spark.sql.errors.QueryCompilationErrors
-
- BucketSpecHelper(BucketSpec) - Constructor for class org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper
-
- buffer() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- bufferEncoder() - Method in class org.apache.spark.ml.feature.StringIndexerAggregator
-
- bufferEncoder() - Method in class org.apache.spark.sql.expressions.Aggregator
-
Specifies the Encoder for the intermediate value type.
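A minimal typed Aggregator in Java showing where bufferEncoder() fits; names are illustrative:

    import org.apache.spark.sql.Encoder;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.expressions.Aggregator;

    // bufferEncoder() tells Spark how to serialize the intermediate Long
    // buffer between reduce/merge steps.
    public class LongSum extends Aggregator<Long, Long, Long> {
      @Override public Long zero() { return 0L; }
      @Override public Long reduce(Long buf, Long in) { return buf + in; }
      @Override public Long merge(Long b1, Long b2) { return b1 + b2; }
      @Override public Long finish(Long buf) { return buf; }
      @Override public Encoder<Long> bufferEncoder() { return Encoders.LONG(); }
      @Override public Encoder<Long> outputEncoder() { return Encoders.LONG(); }
    }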
- BufferReleasingInputStream - Class in org.apache.spark.storage
-
Helper class that ensures a ManagedBuffer is released upon InputStream.close() and also detects stream corruption if streamCompressedOrEncrypted is true.
- BufferReleasingInputStream(InputStream, ShuffleBlockFetcherIterator, BlockId, int, BlockManagerId, boolean, boolean, Option<CheckedInputStream>) - Constructor for class org.apache.spark.storage.BufferReleasingInputStream
-
- bufferSchema() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Deprecated.
A StructType represents data types of values in the aggregation buffer.
- build(Node, int) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
-
- build(DecisionTreeModel, int) - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
-
- build() - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Builds and returns all combinations of parameters specified by the param grid.
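A short Java sketch; lr is a hypothetical LogisticRegression estimator:

    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.tuning.ParamGridBuilder;

    // Cross product of the two grids: 2 x 2 = 4 ParamMaps,
    // e.g. for CrossValidator.setEstimatorParamMaps(grid).
    ParamMap[] grid = new ParamGridBuilder()
        .addGrid(lr.regParam(), new double[]{0.01, 0.1})
        .addGrid(lr.elasticNetParam(), new double[]{0.0, 0.5})
        .build();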
- build() - Method in class org.apache.spark.resource.ResourceProfileBuilder
-
- build() - Method in interface org.apache.spark.sql.connector.read.ScanBuilder
-
- build(Expression) - Method in class org.apache.spark.sql.connector.util.V2ExpressionSQLBuilder
-
- build() - Method in interface org.apache.spark.sql.connector.write.DeltaWriteBuilder
-
- build() - Method in interface org.apache.spark.sql.connector.write.RowLevelOperationBuilder
-
Returns a RowLevelOperation that controls how Spark rewrites data for DELETE, UPDATE, MERGE commands.
- build() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
-
Returns a logical Write shared between batch and streaming.
- build() - Method in class org.apache.spark.sql.jdbc.DB2Dialect.DB2SQLQueryBuilder
-
- build() - Method in class org.apache.spark.sql.jdbc.JdbcSQLQueryBuilder
-
Build the final SQL query following the dialect's SQL syntax.
- build(Expression) - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLBuilder
-
- build() - Method in class org.apache.spark.sql.jdbc.MsSqlServerDialect.MsSqlServerSQLQueryBuilder
-
- build() - Method in class org.apache.spark.sql.jdbc.MySQLDialect.MySQLSQLQueryBuilder
-
- build() - Method in class org.apache.spark.sql.jdbc.OracleDialect.OracleSQLQueryBuilder
-
- build() - Method in class org.apache.spark.sql.types.MetadataBuilder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- build() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- build() - Method in interface org.apache.spark.storage.memory.MemoryEntryBuilder
-
- build() - Method in class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
- builder() - Static method in class org.apache.spark.sql.SparkSession
-
- Builder() - Constructor for class org.apache.spark.sql.SparkSession.Builder
-
- Builder() - Constructor for class org.apache.spark.streaming.kinesis.SparkAWSCredentials.Builder
-
- buildErrorResponse(Response.Status, String) - Static method in class org.apache.spark.ui.UIUtils
-
- buildFilter(Seq<Expression>, Seq<Attribute>) - Method in interface org.apache.spark.sql.columnar.CachedBatchSerializer
-
Builds a function that can be used to filter batches prior to being decompressed.
- buildFilter(Seq<Expression>, Seq<Attribute>) - Method in class org.apache.spark.sql.columnar.SimpleMetricsCachedBatchSerializer
-
- buildForBatch() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
-
- buildForStreaming() - Method in interface org.apache.spark.sql.connector.write.WriteBuilder
-
- buildLocationMetadata(Seq<Path>, int) - Static method in class org.apache.spark.util.Utils
-
Convert a sequence of Paths to a metadata string.
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.AccumulableInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationAttemptInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationEnvironmentInfoWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ApplicationInfoWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.AppSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorMetricsDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorPeakMetricsDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorResourceRequest.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorStageSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ExecutorSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.JobDataWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.MemoryMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.PairStrings.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.PoolData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ProcessSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDDataDistribution.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationClusterWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationEdge.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationGraphWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDOperationNode.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDPartitionInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RDDStorageInfoWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceInformation.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ResourceProfileWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.RuntimeInfo.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShufflePushReadMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleReadMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SinkProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SourceProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphClusterWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphEdge.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNode.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphNodeWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SparkPlanGraphWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummary.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SpeculationStageSummaryWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLExecutionUIData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.SQLPlanMetric.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StageDataWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StateOperatorProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamBlockData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgress.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.StreamingQueryProgressWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskData.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskDataWrapper.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetricDistributions.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskMetrics.Builder
-
- buildPartial() - Method in class org.apache.spark.status.protobuf.StoreTypes.TaskResourceRequest.Builder
-
- buildPlanNormalizationRules(SparkSession) - Method in class org.apache.spark.sql.SparkSessionExtensions
-
- buildPools() - Method in interface org.apache.spark.scheduler.SchedulableBuilder
-
- buildReaderUnsupportedForFileFormatError(String) - Static method in class org.apache.spark.sql.errors.QueryExecutionErrors
-
- buildScan(Seq<Attribute>, Seq<Expression>) - Method in interface org.apache.spark.sql.sources.CatalystScan
-
- buildScan(String[], Filter[]) - Method in interface org.apache.spark.sql.sources.PrunedFilteredScan
-
- buildScan(String[]) - Method in interface org.apache.spark.sql.sources.PrunedScan
-
- buildScan() - Method in interface org.apache.spark.sql.sources.TableScan
-
- buildTreeFromNodes(DecisionTreeModelReadWrite.NodeData[], String) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
-
Given all data for all nodes in a tree, rebuild the tree.
- BYTE() - Static method in class org.apache.spark.api.r.SerializationFormats
-
- BYTE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable byte type.
- BytecodeUtils - Class in org.apache.spark.graphx.util
-
Includes a utility function to test whether a function accesses a specific attribute of an object.
- BytecodeUtils() - Constructor for class org.apache.spark.graphx.util.BytecodeUtils
-
- ByteExactNumeric - Class in org.apache.spark.sql.types
-
- ByteExactNumeric() - Constructor for class org.apache.spark.sql.types.ByteExactNumeric
-
- BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.input$
-
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetricDistributions
-
- BYTES_READ_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.InputMetrics
-
- BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$
-
- BYTES_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.CachedQuantile
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetricDistributions
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.OutputMetrics
-
- BYTES_WRITTEN_FIELD_NUMBER - Static variable in class org.apache.spark.status.protobuf.StoreTypes.ShuffleWriteMetrics
-
- bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
-
- bytesRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
-
- bytesToString(long) - Static method in class org.apache.spark.util.Utils
-
Convert a quantity in bytes to a human-readable string such as "4.0 MiB".
- bytesToString(BigInt) - Static method in class org.apache.spark.util.Utils
-
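A sketch of the conversion documented above; Utils is an internal helper, shown purely for illustration:

    // Binary units: 4 * 1024 * 1024 bytes renders as "4.0 MiB".
    String s = org.apache.spark.util.Utils.bytesToString(4L * 1024 * 1024);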
- byteStringAsBytes(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to bytes for internal use.
- byteStringAsGb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m, 500g) to gibibytes for internal use.
- byteStringAsKb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to kibibytes for internal use.
- byteStringAsMb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a passed byte string (e.g. 50b, 100k, or 250m) to mebibytes for internal use.
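A sketch of the parsing documented above; Utils is an internal helper, shown purely for illustration:

    // Suffixes are binary: k = KiB, m = MiB, g = GiB.
    long bytes = org.apache.spark.util.Utils.byteStringAsBytes("250m"); // 262144000
    long kb    = org.apache.spark.util.Utils.byteStringAsKb("1g");      // 1048576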
- bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
-
- bytesWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
-
- bytesWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
-
- bytesWritten(long) - Method in interface org.apache.spark.util.logging.RollingPolicy
-
Notify that bytes have been written.
- ByteType - Class in org.apache.spark.sql.types
-
The data type representing Byte values.
- ByteType() - Constructor for class org.apache.spark.sql.types.ByteType
-
- ByteType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the ByteType object.
- ByteTypeExpression - Class in org.apache.spark.sql.types
-
- ByteTypeExpression() - Constructor for class org.apache.spark.sql.types.ByteTypeExpression
-