Package org.apache.spark

package spark

Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.
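
As a brief illustration of these implicit conversions, here is a hedged sketch (local mode; the values are arbitrary):

    val conf = new org.apache.spark.SparkConf().setMaster("local[2]").setAppName("example")
    val sc = new org.apache.spark.SparkContext(conf)

    val pairs = sc.parallelize(Seq((1, "a"), (2, "b"), (1, "c")))   // RDD[(Int, String)]
    pairs.groupByKey()                          // PairRDDFunctions, via implicit conversion
    pairs.join(sc.parallelize(Seq((1, 10))))    // also PairRDDFunctions
    sc.parallelize(Seq(1.0, 2.0, 3.0)).mean()   // DoubleRDDFunctions, via implicit conversion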

Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower level interfaces. These are subject to change or removal in minor releases.

Source
package.scala
Linear Supertypes
AnyRef, Any

Type Members

  1. case class Aggregator[K, V, C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiners: (C, C) ⇒ C) extends Product with Serializable

    :: DeveloperApi :: A set of functions used to aggregate data.

    createCombiner

    function to create the initial value of the aggregation.

    mergeValue

    function to merge a new value into the aggregation result.

    mergeCombiners

    function to merge outputs from multiple mergeValue functions.

    Annotations
    @DeveloperApi()
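
    For illustration only (not from the Spark sources), a hedged sketch of an Aggregator that accumulates a per-key sum and count:

    import org.apache.spark.Aggregator

    // K = String (key), V = Double (input value), C = (Double, Long) (sum, count)
    val sumAndCount = Aggregator[String, Double, (Double, Long)](
      createCombiner = (v: Double) => (v, 1L),
      mergeValue = (c: (Double, Long), v: Double) => (c._1 + v, c._2 + 1L),
      mergeCombiners = (a: (Double, Long), b: (Double, Long)) => (a._1 + b._1, a._2 + b._2)
    )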
  2. class BarrierTaskContext extends TaskContext with Logging

    :: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage. Use BarrierTaskContext#get to obtain the barrier context for a running barrier task.

    Annotations
    @Experimental() @Since( "2.4.0" )
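
    A hedged sketch of typical use inside a barrier stage (assumes an active SparkContext sc; the partition body is illustrative):

    import org.apache.spark.BarrierTaskContext

    val rdd = sc.parallelize(1 to 100, numSlices = 4)
    rdd.barrier().mapPartitions { iter =>
      val context = BarrierTaskContext.get()
      // ... per-task setup work would go here ...
      context.barrier()   // blocks until every task in the stage reaches this call
      iter
    }.count()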
  3. class BarrierTaskInfo extends AnyRef

    :: Experimental :: Carries all task infos of a barrier task.

    Annotations
    @Experimental() @Since( "2.4.0" )
  4. class ComplexFutureAction[T] extends FutureAction[T]

    A FutureAction for actions that could trigger multiple Spark jobs. Examples include take, takeSample. Cancellation works by setting the cancelled flag to true and cancelling any pending jobs.

    Annotations
    @DeveloperApi()
  5. abstract class Dependency[T] extends Serializable

    :: DeveloperApi :: Base class for dependencies.

    Annotations
    @DeveloperApi()
  6. case class ExceptionFailure(className: String, description: String, stackTrace: Array[StackTraceElement], fullStackTrace: String, exceptionWrapper: Option[ThrowableSerializationWrapper], accumUpdates: Seq[AccumulableInfo] = Seq.empty, accums: Seq[AccumulatorV2[_, _]] = Nil) extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: Task failed due to a runtime exception. This is the most common failure case and also captures user program exceptions.

    stackTrace contains the stack trace of the exception itself. It still exists for backward compatibility. It's better to use this(e: Throwable, metrics: Option[TaskMetrics]) to create ExceptionFailure as it will handle the backward compatibility properly.

    fullStackTrace is a better representation of the stack trace because it contains the whole stack trace, including the exception and its causes.

    exception is the actual exception that caused the task to fail. It may be None in the case that the exception is not in fact serializable. If a task fails more than once (due to retries), exception is the one that caused the last failure.

    Annotations
    @DeveloperApi()
  7. case class ExecutorLostFailure(execId: String, exitCausedByApp: Boolean = true, reason: Option[String]) extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: The task failed because the executor that it was running on was lost. This may happen because the task crashed the JVM.

    Annotations
    @DeveloperApi()
  8. trait ExecutorPlugin extends AnyRef

  9. case class FetchFailed(bmAddress: BlockManagerId, shuffleId: Int, mapId: Int, reduceId: Int, message: String) extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: Task failed to fetch shuffle data from a remote node. Probably means we have lost the remote executors the task is trying to fetch from, and thus need to rerun the previous stage.

    Annotations
    @DeveloperApi()
  10. trait FutureAction[T] extends Future[T]

    A future for the result of an action to support cancellation. This is an extension of the Scala Future interface to support cancellation.
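
    A hedged sketch using countAsync, one of the async actions that return a FutureAction (assumes an active SparkContext sc):

    import scala.concurrent.Await
    import scala.concurrent.duration._

    val future = sc.parallelize(1 to 1000000).countAsync()   // FutureAction[Long]
    // future.cancel()              // would cancel the Spark job(s) backing the action
    val count = Await.result(future, 1.minute)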

  11. class HashPartitioner extends Partitioner

    A org.apache.spark.Partitioner that implements hash-based partitioning using Java's Object.hashCode.

    Java arrays have hashCodes that are based on the arrays' identities rather than their contents, so attempting to partition an RDD[Array[_]] or RDD[(Array[_], _)] using a HashPartitioner will produce an unexpected or incorrect result.
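
    A hedged usage sketch (assumes an active SparkContext sc):

    import org.apache.spark.HashPartitioner

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
    val partitioned = pairs.partitionBy(new HashPartitioner(4))
    partitioned.partitioner   // Some(HashPartitioner with 4 partitions)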

  12. class InterruptibleIterator[+T] extends Iterator[T]

    :: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality. It works by checking the interrupted flag in TaskContext.

    Annotations
    @DeveloperApi()
  13. final class JobExecutionStatus extends Enum[JobExecutionStatus]

  14. trait JobSubmitter extends AnyRef

    Handle via which a "run" function passed to a ComplexFutureAction can submit jobs for execution.

    Annotations
    @DeveloperApi()
  15. abstract class NarrowDependency[T] extends Dependency[T]

    :: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD. Narrow dependencies allow for pipelined execution.

    Annotations
    @DeveloperApi()
  16. class OneToOneDependency[T] extends NarrowDependency[T]

    :: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs.

    Annotations
    @DeveloperApi()
  17. trait Partition extends Serializable

    An identifier for a partition in an RDD.

  18. abstract class Partitioner extends Serializable

    An object that defines how the elements in a key-value pair RDD are partitioned by key. Maps each key to a partition ID, from 0 to numPartitions - 1.

    Note that the partitioner must be deterministic, i.e. it must return the same partition ID given the same partition key.
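
    For illustration, a hedged sketch of a custom Partitioner (FirstLetterPartitioner is hypothetical, not a Spark class; assumes an active SparkContext sc):

    import org.apache.spark.Partitioner

    class FirstLetterPartitioner extends Partitioner {
      override def numPartitions: Int = 2
      override def getPartition(key: Any): Int = {
        // deterministic: the same key always maps to the same partition id
        if (key.toString.headOption.exists(_ < 'n')) 0 else 1
      }
    }

    sc.parallelize(Seq(("apple", 1), ("zebra", 2))).partitionBy(new FirstLetterPartitioner)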

  19. class RangeDependency[T] extends NarrowDependency[T]

    :: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.

    Annotations
    @DeveloperApi()
  20. class RangePartitioner[K, V] extends Partitioner

    A org.apache.spark.Partitioner that partitions sortable records by range into roughly equal ranges. The ranges are determined by sampling the content of the RDD passed in.

    Note

    The actual number of partitions created by the RangePartitioner might not be the same as the partitions parameter, in the case where the number of sampled records is less than the value of partitions.
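
    A hedged usage sketch (assumes an active SparkContext sc):

    import org.apache.spark.RangePartitioner

    val pairs = sc.parallelize(1 to 1000).map(i => (i, i.toString))
    val byRange = pairs.partitionBy(new RangePartitioner(4, pairs))
    byRange.getNumPartitions   // at most 4, depending on the sampled keys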

  21. class SerializableWritable[T <: Writable] extends Serializable

    Annotations
    @DeveloperApi()
  22. class ShuffleDependency[K, V, C] extends Dependency[Product2[K, V]]

    :: DeveloperApi :: Represents a dependency on the output of a shuffle stage. Note that in the case of shuffle, the RDD is transient since we don't need it on the executor side.

    Annotations
    @DeveloperApi()
  23. class SimpleFutureAction[T] extends FutureAction[T]

    A FutureAction holding the result of an action that triggers a single job. Examples include count, collect, reduce.

    Annotations
    @DeveloperApi()
  24. class SparkConf extends Cloneable with Logging with Serializable

    Configuration for a Spark application. Used to set various Spark parameters as key-value pairs.

    Most of the time, you would create a SparkConf object with new SparkConf(), which will load values from any spark.* Java system properties set in your application as well. In this case, parameters you set directly on the SparkConf object take priority over system properties.

    For unit tests, you can also call new SparkConf(false) to skip loading external settings and get the same configuration no matter what the system properties are.

    All setter methods in this class support chaining. For example, you can write new SparkConf().setMaster("local").setAppName("My app").

    Note

    Once a SparkConf object is passed to Spark, it is cloned and can no longer be modified by the user. Spark does not support modifying the configuration at runtime.
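
    A hedged sketch of the patterns described above (the config values are illustrative):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("My app")
      .set("spark.ui.enabled", "false")

    val testConf = new SparkConf(false)   // skip loading spark.* system properties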

  25. class SparkContext extends Logging

    Main entry point for Spark functionality. A SparkContext represents the connection to a Spark cluster, and can be used to create RDDs, accumulators and broadcast variables on that cluster.

    Only one SparkContext may be active per JVM. You must stop() the active SparkContext before creating a new one. This limitation may eventually be removed; see SPARK-2243 for more details.
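
    A hedged sketch of the usual lifecycle in local mode (build a conf, create the single active context, use it, stop it):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setMaster("local[2]").setAppName("Example")
    val sc = new SparkContext(conf)
    try {
      println(sc.parallelize(1 to 10).reduce(_ + _))   // 55
    } finally {
      sc.stop()
    }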

  26. class SparkEnv extends Logging

    :: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, RpcEnv, block manager, map output tracker, etc. Currently Spark code finds the SparkEnv through a global variable, so all the threads can access the same SparkEnv. It can be accessed by SparkEnv.get (e.g. after creating a SparkContext).

    NOTE: This is not intended for external use. This is exposed for Shark and may be made private in a future release.

    Annotations
    @DeveloperApi()
  27. class SparkException extends Exception

  28. trait SparkExecutorInfo extends Serializable

  29. class SparkFirehoseListener extends SparkListenerInterface

  30. trait SparkJobInfo extends Serializable

  31. trait SparkStageInfo extends Serializable

  32. class SparkStatusTracker extends AnyRef

    Low-level status reporting APIs for monitoring job and stage progress.

    These APIs intentionally provide very weak consistency semantics; consumers of these APIs should be prepared to handle empty / missing information. For example, a job's stage ids may be known but the status API may not have any information about the details of those stages, so getStageInfo could potentially return None for a valid stage id.

    To limit memory usage, these APIs only provide information on recent jobs / stages. These APIs will provide information for the last spark.ui.retainedStages stages and spark.ui.retainedJobs jobs.

    NOTE: this class's constructor should be considered private and may be subject to change.
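
    A hedged sketch of polling job status (assumes an active SparkContext sc; every lookup may legitimately return None or an empty array):

    val tracker = sc.statusTracker
    tracker.getActiveJobIds().flatMap(id => tracker.getJobInfo(id)).foreach { job =>
      println(s"job ${job.jobId()} is ${job.status()}")
    }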

  33. case class TaskCommitDenied(jobID: Int, partitionID: Int, attemptNumber: Int) extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: Task requested the driver to commit, but was denied.

    Annotations
    @DeveloperApi()
  34. abstract class TaskContext extends Serializable

    Contextual information about a task which can be read or mutated during execution. To access the TaskContext for a running task, use:

    org.apache.spark.TaskContext.get()
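
    A hedged sketch of reading task metadata inside a running task (assumes an active SparkContext sc):

    import org.apache.spark.TaskContext

    sc.parallelize(1 to 100, 4).mapPartitions { iter =>
      val ctx = TaskContext.get()
      // e.g. tag each record with the partition and attempt that produced it
      iter.map(x => (ctx.partitionId(), ctx.attemptNumber(), x))
    }.collect()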
  35. sealed trait TaskEndReason extends AnyRef

    :: DeveloperApi :: Various possible reasons why a task ended. The low-level TaskScheduler is supposed to retry tasks several times for "ephemeral" failures, and only report back failures that require some old stages to be resubmitted, such as shuffle map fetch failures.

    Annotations
    @DeveloperApi()
  36. sealed trait TaskFailedReason extends TaskEndReason

    :: DeveloperApi :: Various possible reasons why a task failed.

    Annotations
    @DeveloperApi()
  37. case class TaskKilled(reason: String, accumUpdates: Seq[AccumulableInfo] = Seq.empty, accums: Seq[AccumulatorV2[_, _]] = Nil) extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: Task was killed intentionally and needs to be rescheduled.

    Annotations
    @DeveloperApi()
  38. class TaskKilledException extends RuntimeException

    :: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected).

    Annotations
    @DeveloperApi()
  39. class Accumulable[R, T] extends Serializable

    A data type that can be accumulated, i.e. has a commutative and associative "add" operation, but where the result type, R, may be different from the element type being added, T.

    You must define how to add data, and how to merge two of these together. For some data types, such as a counter, these might be the same operation. In that case, you can use the simpler org.apache.spark.Accumulator. They won't always be the same, though -- e.g., imagine you are accumulating a set. You will add items to the set, and you will union two sets together.

    Operations are not thread-safe.

    R

    the full accumulated data (result type)

    T

    partial data that can be added in

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) use AccumulatorV2

  40. trait AccumulableParam[R, T] extends Serializable

    Helper object defining how to accumulate values of a particular type. An implicit AccumulableParam needs to be available when you create Accumulables of a specific type.

    R

    the full accumulated data (result type)

    T

    partial data that can be added in

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) use AccumulatorV2

  41. class Accumulator[T] extends Accumulable[T, T]

    A simpler value of Accumulable where the result type being accumulated is the same as the types of elements being merged, i.e. variables that are only "added" to through an associative and commutative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric value types, and programmers can add support for new types.

    An accumulator is created from an initial value v by calling SparkContext.accumulator. Tasks running on the cluster can then add to it using the += operator. However, they cannot read its value. Only the driver program can read the accumulator's value, using its value method.

    The interpreter session below shows an accumulator being used to add up the elements of an array:

    scala> val accum = sc.accumulator(0)
    accum: org.apache.spark.Accumulator[Int] = 0
    
    scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
    ...
    10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s
    
    scala> accum.value
    res2: Int = 10
    T

    result type

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) use AccumulatorV2
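
    As the deprecation note suggests, new code should use AccumulatorV2; a hedged sketch of the equivalent usage with the built-in long accumulator (assumes an active SparkContext sc):

    val acc = sc.longAccumulator("My Accumulator")
    sc.parallelize(Array(1, 2, 3, 4)).foreach(x => acc.add(x))
    acc.value   // 10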

  42. trait AccumulatorParam[T] extends AccumulableParam[T, T]

    A simpler version of org.apache.spark.AccumulableParam where the only data type you can add in is the same type as the accumulated value. An implicit AccumulatorParam object needs to be available when you create Accumulators of a specific type.

    T

    type of value to accumulate

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) use AccumulatorV2

Value Members

  1. object BarrierTaskContext extends Serializable

    Annotations
    @Experimental() @Since( "2.4.0" )
  2. object Partitioner extends Serializable

  3. object Resubmitted extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: A org.apache.spark.scheduler.ShuffleMapTask that completed successfully earlier, but we lost the executor before the stage completed. This means Spark needs to reschedule the task to be re-executed on a different executor.

    Annotations
    @DeveloperApi()
  4. val SPARK_BRANCH: String

  5. val SPARK_BUILD_DATE: String

  6. val SPARK_BUILD_USER: String

  7. val SPARK_REPO_URL: String

  8. val SPARK_REVISION: String

  9. val SPARK_VERSION: String
  10. object SparkContext extends Logging

    The SparkContext object contains a number of implicit conversions and parameters for use with various Spark features.

  11. object SparkEnv extends Logging

  12. object SparkFiles

    Resolves paths to files added through SparkContext.addFile().
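
    A hedged sketch (the file path is illustrative): ship a file with the job via SparkContext.addFile and resolve its executor-local path with SparkFiles.get:

    import org.apache.spark.SparkFiles

    sc.addFile("/path/to/lookup.txt")
    sc.parallelize(1 to 4).map { _ =>
      scala.io.Source.fromFile(SparkFiles.get("lookup.txt")).getLines().size
    }.collect()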

  13. object Success extends TaskEndReason with Product with Serializable

    :: DeveloperApi :: Task succeeded.

    Annotations
    @DeveloperApi()
  14. object TaskContext extends Serializable

  15. object TaskResultLost extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched.

    Annotations
    @DeveloperApi()
  16. object UnknownReason extends TaskFailedReason with Product with Serializable

    :: DeveloperApi :: We don't know why the task ended -- for example, because of a ClassNotFound exception when deserializing the task result.

    Annotations
    @DeveloperApi()
  17. object WritableConverter extends Serializable

  18. object WritableFactory extends Serializable

  19. package api

  20. package broadcast

    Spark's broadcast variables, used to broadcast immutable datasets to all nodes.
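
    A hedged sketch of the typical pattern (assumes an active SparkContext sc): broadcast a small lookup table once and read it inside tasks via value:

    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    sc.parallelize(Seq("a", "b", "a")).map(k => lookup.value.getOrElse(k, 0)).collect()
    // Array(1, 2, 1)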

  21. package graphx

    ALPHA COMPONENT GraphX is a graph processing framework built on top of Spark.

  22. package input

  23. package internal

  24. package io

    IO codecs used for compression. See org.apache.spark.io.CompressionCodec.

  25. package launcher

  26. package mapred

  27. package metrics

  28. package ml

    DataFrame-based machine learning APIs to let users quickly assemble and configure practical machine learning pipelines.

  29. package mllib

    RDD-based machine learning APIs (in maintenance mode).

    The spark.mllib package is in maintenance mode as of the Spark 2.0.0 release to encourage migration to the DataFrame-based APIs under the org.apache.spark.ml package. While in maintenance mode,

    • no new features in the RDD-based spark.mllib package will be accepted, unless they block implementing new features in the DataFrame-based spark.ml package;
    • bug fixes in the RDD-based APIs will still be accepted.

    The developers will continue adding more features to the DataFrame-based APIs in the 2.x series to reach feature parity with the RDD-based APIs. And once we reach feature parity, this package will be deprecated.

    See also

    SPARK-4591 to track the progress of feature parity

  30. package partial

    :: Experimental ::

    Support for approximate results. This provides a convenient API and implementations for approximate calculations.

    See also

    org.apache.spark.rdd.RDD.countApprox

  31. package rdd

    Provides several RDD implementations. See org.apache.spark.rdd.RDD.

  32. package scheduler

    Spark's scheduling components. This includes the org.apache.spark.scheduler.DAGScheduler and lower level org.apache.spark.scheduler.TaskScheduler.

  33. package security

  34. package serializer

    Pluggable serializers for RDD and shuffle data.

    See also

    org.apache.spark.serializer.Serializer

  35. package sql

    Allows the execution of relational queries, including those expressed in SQL using Spark.

  36. package status

  37. package storage

  38. package streaming

    Spark Streaming functionality. org.apache.spark.streaming.StreamingContext serves as the main entry point to Spark Streaming, while org.apache.spark.streaming.dstream.DStream is the data type representing a continuous sequence of RDDs, i.e. a continuous stream of data.

    In addition, org.apache.spark.streaming.dstream.PairDStreamFunctions contains operations available only on DStreams of key-value pairs, such as groupByKey and reduceByKey. These operations are automatically available on any DStream of the right type (e.g. DStream[(Int, Int)]) through implicit conversions.

    For the Java API of Spark Streaming, take a look at the org.apache.spark.streaming.api.java.JavaStreamingContext which serves as the entry point, and the org.apache.spark.streaming.api.java.JavaDStream and the org.apache.spark.streaming.api.java.JavaPairDStream which have the DStream functionality.

  39. package util

    Spark utilities.

Deprecated Value Members

  1. object AccumulatorParam extends Serializable

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) use AccumulatorV2
