Class FeatureHasher
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasInputCols, HasNumFeatures, HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable
The FeatureHasher transformer operates on multiple columns. Each column may contain either
numeric or categorical features. The behavior and handling of column data types are as follows:
- Numeric columns: For numeric features, the hash value of the column name is used to map the
feature value to its index in the feature vector. By default, numeric features
are not treated as categorical (even when they are integers). To treat them
as categorical, specify the relevant columns in categoricalCols.
- String columns: For categorical features, the hash value of the string "column_name=value"
is used to map to the vector index, with an indicator value of 1.0.
Thus, categorical features are "one-hot" encoded
(similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is,
boolean features are represented as "column_name=true" or "column_name=false",
with an indicator value of 1.0.
Null (missing) values are ignored (implicitly zero in the resulting feature vector).
The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo
on the hashed value is used to determine the vector index, it is advisable to use a power of two
as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector
indices.
val df = Seq(
(2.0, true, "1", "foo"),
(3.0, false, "2", "bar")
).toDF("real", "bool", "stringNum", "string")
val hasher = new FeatureHasher()
.setInputCols("real", "bool", "stringNum", "string")
.setOutputCol("features")
hasher.transform(df).show(false)
+----+-----+---------+------+------------------------------------------------------+
|real|bool |stringNum|string|features |
+----+-----+---------+------+------------------------------------------------------+
|2.0 |true |1 |foo |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
|3.0 |false|2 |bar |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
+----+-----+---------+------+------------------------------------------------------+
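The index mapping described above can be sketched in plain Scala. This is a simplified illustration only: it uses scala.util.hashing.MurmurHash3.stringHash with its default seed, which need not match the seed and byte-level encoding Spark uses internally, so the indices it produces are not guaranteed to equal those chosen by FeatureHasher.

```scala
import scala.util.hashing.MurmurHash3

val numFeatures = 1 << 18 // 262144, the default; a power of two

// Categorical (string/boolean) feature: the string "column_name=value" is
// hashed, and an indicator value of 1.0 is stored at the resulting index.
def categoricalIndex(col: String, value: String): Int =
  Math.floorMod(MurmurHash3.stringHash(s"$col=$value"), numFeatures)

// Numeric feature: the column name alone is hashed, and the feature value
// itself is stored at the resulting index.
def numericIndex(col: String): Int =
  Math.floorMod(MurmurHash3.stringHash(col), numFeatures)
```

Because numFeatures is a power of two here, the modulo step simply keeps the low bits of the hash; with a non-power-of-two size, the modulo would map hash values unevenly across indices.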
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter

Constructor Summary
FeatureHasher()

Method Summary
categoricalCols(): Numeric columns to treat as categorical features.
copy(ParamMap extra): Creates a copy of this instance with the same UID and some extra params.
getCategoricalCols(): returns String[]
inputCols() (final StringArrayParam): Param for input column names.
load(String path) (static): returns FeatureHasher
numFeatures() (final IntParam): Param for Number of features.
outputCol(): Param for output column name.
read() (static): returns MLReader<T>
setCategoricalCols(String[] value)
setInputCols(String[] value)
setInputCols(scala.collection.immutable.Seq<String> values)
setNumFeatures(int value)
setOutputCol(String value)
toString()
transform(Dataset dataset): Transforms the input dataset.
transformSchema(StructType schema): Check transform validity and derive the output schema from the input schema.
uid(): An immutable unique ID for the object and its derivatives.

Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage:
params

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable:
write

Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCols:
getInputCols

Methods inherited from interface org.apache.spark.ml.param.shared.HasNumFeatures:
getNumFeatures

Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol:
getOutputCol

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, estimateMatadataSize, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Constructor Details

FeatureHasher
public FeatureHasher()
Method Details

load

read
numFeatures
Description copied from interface: HasNumFeatures
Param for Number of features. Should be greater than 0.
Specified by: numFeatures in interface HasNumFeatures
Returns: (undocumented)
outputCol
Description copied from interface: HasOutputCol
Param for output column name.
Specified by: outputCol in interface HasOutputCol
Returns: (undocumented)
-
inputCols
Description copied from interface:HasInputColsParam for input column names.- Specified by:
inputColsin interfaceHasInputCols- Returns:
- (undocumented)
uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
Returns: (undocumented)
categoricalCols
Numeric columns to treat as categorical features. By default only string and boolean columns are treated as categorical, so this param can be used to explicitly specify the numeric columns to treat as categorical. Note that the relevant columns must also be set in inputCols; categorical columns not set in inputCols will be listed in a warning.
Returns: (undocumented)
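For example, to treat the numeric "real" column from the class example above as categorical, pass it to setCategoricalCols. This is a usage sketch: with this setting, values such as "real=2.0" (assuming the default toString rendering of the value) are hashed with an indicator of 1.0, instead of the raw value being stored at the hash of the column name.

```scala
val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setCategoricalCols(Array("real"))
  .setOutputCol("features")
```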
setNumFeatures

setInputCols

setInputCols

setOutputCol

getCategoricalCols

setCategoricalCols
transform
Description copied from class: Transformer
Transforms the input dataset.
Specified by: transform in class Transformer
Parameters: dataset - (undocumented)
Returns: (undocumented)
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Transformer
Parameters: extra - (undocumented)
Returns: (undocumented)
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
Returns: (undocumented)
toString
Specified by: toString in interface Identifiable
Overrides: toString in class Object