Package org.apache.spark
Class FetchFailed

java.lang.Object
  org.apache.spark.FetchFailed
All Implemented Interfaces:
Serializable, TaskEndReason, TaskFailedReason, scala.Equals, scala.Product
:: DeveloperApi ::
Task failed to fetch shuffle data from a remote node. This probably means we have lost the remote executors the task is trying to fetch from, and thus need to rerun the previous stage.
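In practice, FetchFailed surfaces as the reason field of a completed task. A minimal sketch of consuming it from a SparkListener, assuming standard Spark APIs (the listener class name and log format are illustrative):

import org.apache.spark.FetchFailed
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Illustrative listener: logs shuffle-fetch failures as they are reported.
// FetchFailed is a case class, so its fields can be extracted by pattern match.
class FetchFailureLogger extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit =
    taskEnd.reason match {
      case FetchFailed(bmAddress, shuffleId, mapId, mapIndex, reduceId, message) =>
        println(s"Fetch failed for shuffle $shuffleId " +
          s"(map id $mapId, map index $mapIndex, reduce $reduceId) " +
          s"from $bmAddress: $message")
      case _ => // other task end reasons are not of interest here
    }
}

Such a listener would be registered with SparkContext.addSparkListener.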
Constructor Summary

FetchFailed(BlockManagerId bmAddress, int shuffleId, long mapId, int mapIndex, int reduceId, String message)
Method Summary

abstract static R apply(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6)
BlockManagerId bmAddress()
boolean countTowardsTaskFailures()
    Fetch failures lead to a different failure handling path: (1) we don't abort the stage after 4 task failures; instead we immediately go back to the stage which generated the map output and regenerate the missing data.
long mapId()
int mapIndex()
String message()
int reduceId()
int shuffleId()
String toErrorString()
    Error message displayed in the web UI.
static String toString()

Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface scala.Equals
canEqual, equals

Methods inherited from interface scala.Product
productArity, productElement, productElementName, productElementNames, productIterator, productPrefix
Constructor Details
FetchFailed
public FetchFailed(BlockManagerId bmAddress, int shuffleId, long mapId, int mapIndex, int reduceId, String message)
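Since FetchFailed is a case class, instances can also be created through the companion's apply method; a sketch with purely illustrative values (the host, port, and ids below are made up):

import org.apache.spark.FetchFailed
import org.apache.spark.storage.BlockManagerId

// All values are made up for illustration.
val remote = BlockManagerId("exec-3", "worker-7.example.com", 7337)

val reason = FetchFailed(
  remote,                    // bmAddress: block manager the fetch targeted
  2,                         // shuffleId: which shuffle's output was being read
  14L,                       // mapId: unique id of the map task that wrote the block
  5,                         // mapIndex: index of the map partition within its stage
  9,                         // reduceId: reduce partition doing the fetch
  "Connection reset by peer" // message: error text shown in the web UI
)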
Method Details
apply
public abstract static R apply(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6)
toString
public static String toString()
bmAddress
public BlockManagerId bmAddress()
shuffleId
public int shuffleId()
mapId
public long mapId()
mapIndex
public int mapIndex()
reduceId
public int reduceId()
message
public String message()
toErrorString
public String toErrorString()
Description copied from interface: TaskFailedReason
Error message displayed in the web UI.
Specified by:
toErrorString in interface TaskFailedReason
countTowardsTaskFailures
public boolean countTowardsTaskFailures()
Fetch failures lead to a different failure handling path: (1) we don't abort the stage after 4 task failures; instead we immediately go back to the stage which generated the map output and regenerate the missing data. (2) We don't count fetch failures from executors excluded due to too many task failures, since presumably it's not the fault of the executor where the task ran, but of the executor which stored the data. This is especially important because we might rack up a bunch of fetch failures in rapid succession, on all nodes of the cluster, due to one bad node.
Specified by:
countTowardsTaskFailures in interface TaskFailedReason
Returns:
(undocumented)
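To make the two paths concrete, here is a simplified sketch of the decision this flag drives. It is not Spark's actual scheduler code; handleFailure, failures, and maxFailures are hypothetical names:

import org.apache.spark.{FetchFailed, TaskFailedReason}

// Simplified sketch, not Spark's scheduler: a fetch failure resubmits the
// stage that produced the map output rather than counting toward the
// per-task failure limit; excluded-executor handling is omitted here.
def handleFailure(reason: TaskFailedReason, failures: Int, maxFailures: Int): String =
  reason match {
    case f: FetchFailed =>
      s"resubmit the stage that produced shuffle ${f.shuffleId}"
    case r if r.countTowardsTaskFailures && failures + 1 >= maxFailures =>
      "abort the stage: too many task failures"
    case _ =>
      "retry the task"
  }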