public class RRDD<T> extends BaseRRDD<T,byte[]>
Constructor and Description |
---|
RRDD(RDD<T> parent, byte[] func, String deserializer, String serializer, byte[] packageNames, Object[] broadcastVars, scala.reflect.ClassTag<T> evidence$4) |
Modifier and Type | Method and Description |
---|---|
JavaRDD<byte[]> | asJavaRDD() |
static JavaRDD<byte[]> | createRDDFromArray(JavaSparkContext jsc, byte[][] arr) Create an RRDD given a sequence of byte arrays. |
static JavaRDD<byte[]> | createRDDFromFile(JavaSparkContext jsc, String fileName, int parallelism) Create an RRDD given a temporary file name. |
static JavaSparkContext | createSparkContext(String master, String appName, String sparkHome, String[] jars, java.util.Map<Object,Object> sparkEnvirMap, java.util.Map<Object,Object> sparkExecutorEnvMap) |
Methods inherited from class org.apache.spark.api.r.BaseRRDD:
compute, getPartitions
Methods inherited from class org.apache.spark.rdd.RDD:
aggregate, barrier, cache, cartesian, checkpoint, cleanShuffleDependencies, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, first, flatMap, fold, foreach, foreachPartition, getCheckpointFile, getNumPartitions, getResourceProfile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, localCheckpoint, map, mapPartitions, mapPartitionsWithEvaluator, mapPartitionsWithIndex, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeReduce, union, unpersist, withResources, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId
Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public static JavaSparkContext createSparkContext(String master, String appName, String sparkHome, String[] jars, java.util.Map<Object,Object> sparkEnvirMap, java.util.Map<Object,Object> sparkExecutorEnvMap)
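The sketch below is a minimal, hypothetical Java call to this static entry point, which the SparkR backend normally drives on behalf of the R frontend. The master URL, application name, empty jars array, and empty environment maps are all placeholder assumptions.

```java
// Minimal sketch (assumption: the SparkR backend, not user code, normally drives
// this entry point). All argument values below are illustrative placeholders.
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.r.RRDD;

public class CreateSparkContextSketch {
  public static void main(String[] args) {
    Map<Object, Object> sparkEnvirMap = new HashMap<>();       // extra Spark properties (left empty here)
    Map<Object, Object> sparkExecutorEnvMap = new HashMap<>(); // executor environment variables (left empty here)

    JavaSparkContext jsc = RRDD.createSparkContext(
        "local[2]",          // master
        "RRDDSketch",        // appName
        "",                  // sparkHome (placeholder; empty when not needed)
        new String[0],       // jars
        sparkEnvirMap,
        sparkExecutorEnvMap);

    System.out.println("Spark version: " + jsc.version());
    jsc.stop();
  }
}
```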
public static JavaRDD<byte[]> createRDDFromArray(JavaSparkContext jsc, byte[][] arr)
Create an RRDD given a sequence of byte arrays. Used to create RRDD when parallelize is called from R.
Parameters:
jsc - (undocumented)
arr - (undocumented)
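A minimal sketch of the call shape, run against a local master. In SparkR each byte[] element would hold an R-serialized object; plain UTF-8 bytes are used here purely for illustration.

```java
// Minimal sketch: build a JavaRDD<byte[]> from in-memory byte arrays.
// The payloads below are illustrative; SparkR would pass R-serialized objects.
import java.nio.charset.StandardCharsets;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.r.RRDD;

public class CreateRDDFromArraySketch {
  public static void main(String[] args) {
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "CreateRDDFromArraySketch");

    byte[][] arr = new byte[][] {
        "first element".getBytes(StandardCharsets.UTF_8),
        "second element".getBytes(StandardCharsets.UTF_8)
    };

    JavaRDD<byte[]> rdd = RRDD.createRDDFromArray(jsc, arr);
    System.out.println("count = " + rdd.count());                   // 2
    System.out.println("partitions = " + rdd.getNumPartitions());

    jsc.stop();
  }
}
```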
public static JavaRDD<byte[]> createRDDFromFile(JavaSparkContext jsc, String fileName, int parallelism)
Create an RRDD given a temporary file name.
Parameters:
fileName - name of temporary file on driver machine
parallelism - number of slices, defaults to 4
jsc - (undocumented)
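A minimal sketch, assuming each record in the temporary file is a 4-byte big-endian length prefix followed by that many payload bytes (the layout the R frontend is assumed to write); the temporary file created below is purely illustrative.

```java
// Minimal sketch: write a throwaway file of length-prefixed records on the driver,
// then turn it into a JavaRDD<byte[]>. The record layout is an assumption.
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.r.RRDD;

public class CreateRDDFromFileSketch {
  public static void main(String[] args) throws Exception {
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "CreateRDDFromFileSketch");

    // Write two length-prefixed records to a temporary file.
    File tmp = File.createTempFile("rrdd-records", ".bin");
    try (DataOutputStream out = new DataOutputStream(new FileOutputStream(tmp))) {
      for (String s : new String[] {"first record", "second record"}) {
        byte[] payload = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length); // length prefix
        out.write(payload);           // payload bytes
      }
    }

    JavaRDD<byte[]> rdd = RRDD.createRDDFromFile(jsc, tmp.getAbsolutePath(), 4); // 4 slices (the documented default)
    System.out.println("count = " + rdd.count()); // 2

    jsc.stop();
  }
}
```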
public JavaRDD<byte[]> asJavaRDD()