Packages

  • package org.apache.spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions; see the sketch at the end of this description.

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.
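
    A minimal sketch of those implicit conversions in action, assuming a local SparkContext (the object name, application name, and master URL are illustrative):

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.rdd.RDD

      object PairOpsExample {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf().setAppName("PairOpsExample").setMaster("local[*]")
          val sc = new SparkContext(conf)

          // An RDD of key-value pairs. groupByKey is not defined on RDD itself;
          // the implicit conversion to PairRDDFunctions makes it available.
          val pairs: RDD[(Int, Int)] = sc.parallelize(Seq((1, 2), (1, 3), (2, 4)))
          val grouped: RDD[(Int, Iterable[Int])] = pairs.groupByKey()

          // Prints each key with its grouped values.
          grouped.collect().foreach(println)
          sc.stop()
        }
      }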

  • package org.apache.spark.mapred

org.apache.spark.mapred

SparkHadoopMapRedUtil

object SparkHadoopMapRedUtil extends Logging

Source
SparkHadoopMapRedUtil.scala
Linear Supertypes
Logging, AnyRef, Any

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    Logging

Value Members

  1. def commitTask(committer: OutputCommitter, mrTaskContext: TaskAttemptContext, jobId: Int, splitId: Int): Unit

    Commits a task output. Before committing the task output, we need to know whether some other task attempt might be racing to commit the same output partition. Therefore, coordinate with the driver in order to determine whether this attempt can commit (see SPARK-4879 for details).

    The output commit coordinator is used only when spark.hadoop.outputCommitCoordination.enabled is set to true (the default).
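
    A minimal sketch of how a Hadoop-backed writer running inside a Spark task might call this method; the finishTaskAttempt helper and the use of the stage and partition ids as jobId and splitId are illustrative assumptions, not part of this API:

      import org.apache.hadoop.mapreduce.{OutputCommitter, TaskAttemptContext}
      import org.apache.spark.TaskContext
      import org.apache.spark.mapred.SparkHadoopMapRedUtil

      // Hypothetical helper, called on an executor once a task attempt has
      // finished writing its partition. commitTask first checks with the
      // driver's output commit coordinator (when coordination is enabled)
      // whether this attempt may commit, then delegates to the committer.
      def finishTaskAttempt(committer: OutputCommitter,
                            hadoopContext: TaskAttemptContext): Unit = {
        val ctx = TaskContext.get() // non-null only inside a running task
        SparkHadoopMapRedUtil.commitTask(
          committer,
          hadoopContext,
          jobId = ctx.stageId(),       // illustrative choice of job identifier
          splitId = ctx.partitionId()) // the partition this attempt wrote
      }

    With spark.hadoop.outputCommitCoordination.enabled set to false, the driver round-trip is skipped and the task commits through the Hadoop committer without coordination.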