Package org.apache.spark.ml.feature
Class VectorAssembler
java.lang.Object
  org.apache.spark.ml.PipelineStage
    org.apache.spark.ml.Transformer
      org.apache.spark.ml.feature.VectorAssembler
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasHandleInvalid, HasInputCols, HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable, scala.Serializable
public class VectorAssembler
extends Transformer
implements HasInputCols, HasOutputCol, HasHandleInvalid, DefaultParamsWritable
A feature transformer that merges multiple columns into a vector column.
This requires one pass over the entire dataset. If column lengths need to be inferred from the data, an additional call to the 'first' Dataset method is required; see the 'handleInvalid' parameter.
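A minimal usage sketch in Scala, assuming a SparkSession is available; the column names, sample rows, and application name below are illustrative, not part of this API reference:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("VectorAssemblerSketch").getOrCreate()

// Toy input with three numeric feature columns.
val dataset = spark.createDataFrame(Seq(
  (0, 18.0, 1.0, 0.5),
  (1, 21.0, 0.0, 0.3)
)).toDF("id", "hour", "clicked", "ctr")

val assembler = new VectorAssembler()
  .setInputCols(Array("hour", "clicked", "ctr"))
  .setOutputCol("features")

// Appends a single vector column assembled from the input columns.
val assembled = assembler.transform(dataset)
assembled.select("id", "features").show(false)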
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
VectorAssembler()
-
Method Summary
copy(ParamMap extra)
    Creates a copy of this instance with the same UID and some extra params.
handleInvalid()
    Param for how to handle invalid data (NULL values).
final StringArrayParam inputCols()
    Param for input column names.
static VectorAssembler load(String path)
outputCol()
    Param for output column name.
static MLReader<T> read()
setHandleInvalid(String value)
setInputCols(String[] value)
setOutputCol(String value)
toString()
transform(Dataset<?> dataset)
    Transforms the input dataset.
transformSchema(StructType schema)
    Check transform validity and derive the output schema from the input schema.
uid()
    An immutable unique ID for the object and its derivatives.
Methods inherited from class org.apache.spark.ml.Transformer
transform, transform, transform
Methods inherited from class org.apache.spark.ml.PipelineStage
params
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable
write
Methods inherited from interface org.apache.spark.ml.param.shared.HasHandleInvalid
getHandleInvalid
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCols
getInputCols
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol
getOutputCol
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq
Methods inherited from interface org.apache.spark.ml.util.MLWritable
save
Methods inherited from interface org.apache.spark.ml.param.Params
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
-
Constructor Details
-
VectorAssembler
public VectorAssembler()
-
-
Method Details
-
load
public static VectorAssembler load(String path)
-
read
public static MLReader<T> read()
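A hedged sketch of how read()/load() pair with the write/save methods inherited from DefaultParamsWritable and MLWritable; the output path below is illustrative:

import org.apache.spark.ml.feature.VectorAssembler

val assembler = new VectorAssembler()
  .setInputCols(Array("hour", "clicked"))
  .setOutputCol("features")

// Persist only the params (DefaultParamsWritable), then restore them.
assembler.write.overwrite().save("/tmp/vector-assembler-params")
val restored = VectorAssembler.load("/tmp/vector-assembler-params")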
-
outputCol
Description copied from interface: HasOutputCol
Param for output column name.
- Specified by:
outputCol in interface HasOutputCol
- Returns:
- (undocumented)
-
inputCols
Description copied from interface: HasInputCols
Param for input column names.
- Specified by:
inputCols in interface HasInputCols
- Returns:
- (undocumented)
-
uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
- Specified by:
uid in interface Identifiable
- Returns:
- (undocumented)
-
setInputCols
public VectorAssembler setInputCols(String[] value)
-
setOutputCol
public VectorAssembler setOutputCol(String value)
-
setHandleInvalid
public VectorAssembler setHandleInvalid(String value)
-
handleInvalid
Param for how to handle invalid data (NULL values). Options are 'skip' (filter out rows with invalid data), 'error' (throw an error), or 'keep' (return relevant number of NaN in the output). Column lengths are taken from the size of the ML Attribute Group, which can be set using VectorSizeHint in a pipeline before VectorAssembler. Column lengths can also be inferred from the first rows of the data, but only in the case of 'error' or 'skip', since it is safe to do so. Default: "error"
- Specified by:
handleInvalid in interface HasHandleInvalid
- Returns:
- (undocumented)
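A sketch of the 'keep' option combined with VectorSizeHint, so that the length of a vector input column is declared up front instead of being inferred from the data; the column names and the size value are illustrative:

import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.feature.{VectorAssembler, VectorSizeHint}

// Declare the size of the vector column via the ML Attribute Group.
val sizeHint = new VectorSizeHint()
  .setInputCol("userFeatures")
  .setSize(3)

val assembler = new VectorAssembler()
  .setInputCols(Array("hour", "userFeatures"))
  .setOutputCol("features")
  .setHandleInvalid("keep") // nulls/NaNs propagate as NaN entries instead of failing

val pipeline = new Pipeline()
  .setStages(Array[PipelineStage](sizeHint, assembler))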
-
transform
Description copied from class: Transformer
Transforms the input dataset.
- Specified by:
transform in class Transformer
- Parameters:
dataset - (undocumented)
- Returns:
- (undocumented)
-
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
Typical implementations should first verify the schema change and parameter validity, including complex parameter interaction checks.
- Specified by:
transformSchema in class PipelineStage
- Parameters:
schema - (undocumented)
- Returns:
- (undocumented)
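A small sketch of calling transformSchema directly to validate parameters against a schema without touching any data; the schema fields below are illustrative:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

val schema = StructType(Seq(
  StructField("hour", DoubleType, nullable = false),
  StructField("clicked", DoubleType, nullable = false)
))

val assembler = new VectorAssembler()
  .setInputCols(Array("hour", "clicked"))
  .setOutputCol("features")

// Throws if the params/schema are incompatible; otherwise returns the input
// schema extended with the "features" vector column.
val outputSchema: StructType = assembler.transformSchema(schema)
outputSchema.printTreeString()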
-
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
- Specified by:
copy in interface Params
- Specified by:
copy in class Transformer
- Parameters:
extra - (undocumented)
- Returns:
- (undocumented)
-
toString
- Specified by:
toString in interface Identifiable
- Overrides:
toString in class Object
-