Class FeatureHasher

All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasInputCols, HasNumFeatures, HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable, scala.Serializable

public class FeatureHasher extends Transformer implements HasInputCols, HasOutputCol, HasNumFeatures, DefaultParamsWritable
Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space). This is done using the hashing trick (https://en.wikipedia.org/wiki/Feature_hashing) to map features to indices in the feature vector.

The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:

- Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns in categoricalCols.
- String columns: For categorical features, the hash value of the string "column_name=value" is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are "one-hot" encoded (similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as "column_name=true" or "column_name=false", with an indicator value of 1.0.
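The per-type rules above can be sketched outside Spark. The snippet below is a simplified illustration, assuming Python's built-in zlib.crc32 as a stand-in for MurmurHash 3 (so the resulting indices will not match Spark's), but the mapping logic mirrors the rules described here:

```python
import zlib

NUM_FEATURES = 262144  # Spark's default numFeatures (2^18)

def hash_index(key: str) -> int:
    # Stand-in for MurmurHash 3: hash the UTF-8 bytes of the key, then take
    # a nonnegative modulo to pick a slot in the feature vector.
    return zlib.crc32(key.encode("utf-8")) % NUM_FEATURES

def featurize(row: dict, categorical_cols=()) -> dict:
    """Map one row to {vector_index: value}, mimicking FeatureHasher's rules."""
    vec = {}
    for col, value in row.items():
        if value is None:
            continue  # nulls are ignored, i.e. implicitly zero
        if isinstance(value, bool):
            # Booleans behave like strings: "col=true" / "col=false", value 1.0
            key, v = f"{col}={str(value).lower()}", 1.0
        elif isinstance(value, (int, float)) and col not in categorical_cols:
            # Numeric: hash the column *name*, keep the raw feature value
            key, v = col, float(value)
        else:
            # Categorical (strings, or numerics listed in categoricalCols):
            # one-hot style "col=value" with indicator 1.0
            key, v = f"{col}={value}", 1.0
        idx = hash_index(key)
        vec[idx] = vec.get(idx, 0.0) + v  # colliding features accumulate
    return vec

features = featurize({"real": 2.0, "bool": True, "stringNum": "1", "string": "foo"})
```

Note that collisions are resolved by summation, which is why the example output further down shows a value of 2.0 and 3.0 at index 174475: two features from the same row hashed to the same slot.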

Null (missing) values are ignored (implicitly zero in the resulting feature vector).

The hash function used here is MurmurHash 3, the same function used in HashingTF. Since a simple modulo of the hashed value determines the vector index, it is advisable to use a power of two for the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.
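The 262144 seen in the example output below is the default numFeatures, 2^18. A minimal Python sketch of the modulo mapping just described (the hash value here is made up for illustration; Spark uses MurmurHash 3):

```python
NUM_FEATURES = 262144  # default numFeatures, 2^18

def non_negative_mod(hash_value: int, num_features: int) -> int:
    # MurmurHash 3 produces a signed 32-bit integer; Python's % with a
    # positive modulus already yields a nonnegative result, which matches
    # the "simple modulo" mapping described above.
    return hash_value % num_features

# Even a negative hash lands in [0, num_features):
idx = non_negative_mod(-1746721722, NUM_FEATURES)
```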


   val df = Seq(
    (2.0, true, "1", "foo"),
    (3.0, false, "2", "bar")
   ).toDF("real", "bool", "stringNum", "string")

   val hasher = new FeatureHasher()
    .setInputCols("real", "bool", "stringNum", "string")
    .setOutputCol("features")

   hasher.transform(df).show(false)

   +----+-----+---------+------+------------------------------------------------------+
   |real|bool |stringNum|string|features                                              |
   +----+-----+---------+------+------------------------------------------------------+
   |2.0 |true |1        |foo   |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
   |3.0 |false|2        |bar   |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
   +----+-----+---------+------+------------------------------------------------------+
 
  • Constructor Details

    • FeatureHasher

      public FeatureHasher(String uid)
    • FeatureHasher

      public FeatureHasher()
  • Method Details

    • load

      public static FeatureHasher load(String path)
    • read

      public static MLReader<T> read()
    • numFeatures

      public final IntParam numFeatures()
      Description copied from interface: HasNumFeatures
      Param for Number of features. Should be greater than 0.
      Specified by:
      numFeatures in interface HasNumFeatures
      Returns:
      (undocumented)
    • outputCol

      public final Param<String> outputCol()
      Description copied from interface: HasOutputCol
      Param for output column name.
      Specified by:
      outputCol in interface HasOutputCol
      Returns:
      (undocumented)
    • inputCols

      public final StringArrayParam inputCols()
      Description copied from interface: HasInputCols
      Param for input column names.
      Specified by:
      inputCols in interface HasInputCols
      Returns:
      (undocumented)
    • uid

      public String uid()
      Description copied from interface: Identifiable
      An immutable unique ID for the object and its derivatives.
      Specified by:
      uid in interface Identifiable
      Returns:
      (undocumented)
    • categoricalCols

      public StringArrayParam categoricalCols()
Numeric columns to treat as categorical features. By default only string and boolean columns are treated as categorical, so this param can be used to explicitly specify numeric columns to treat as categorical. Note that the relevant columns must also be set in inputCols; categorical columns not set in inputCols will be listed in a warning.
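To illustrate the effect of this param, the sketch below contrasts the default numeric handling with categorical handling. It is plain Python with zlib.crc32 standing in for MurmurHash 3, so it is an assumption about the shape of the mapping, not a reproduction of Spark's exact indices:

```python
import zlib

def slot(key: str, num_features: int = 262144) -> int:
    # zlib.crc32 as a stand-in for Spark's MurmurHash 3 (illustration only)
    return zlib.crc32(key.encode("utf-8")) % num_features

col, value = "hour", 11.0

# Default: "hour" is numeric, so the column *name* is hashed and the
# raw value 11.0 is stored at that slot.
numeric_entry = (slot(col), value)

# With "hour" listed in categoricalCols: "hour=11.0" is hashed and an
# indicator 1.0 is stored, i.e. each distinct value gets its own slot.
categorical_entry = (slot(f"{col}={value}"), 1.0)
```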
      Returns:
      (undocumented)
    • setNumFeatures

      public FeatureHasher setNumFeatures(int value)
    • setInputCols

      public FeatureHasher setInputCols(scala.collection.Seq<String> values)
    • setInputCols

      public FeatureHasher setInputCols(String[] value)
    • setOutputCol

      public FeatureHasher setOutputCol(String value)
    • getCategoricalCols

      public String[] getCategoricalCols()
    • setCategoricalCols

      public FeatureHasher setCategoricalCols(String[] value)
    • transform

      public Dataset<Row> transform(Dataset<?> dataset)
      Description copied from class: Transformer
      Transforms the input dataset.
      Specified by:
      transform in class Transformer
      Parameters:
      dataset - (undocumented)
      Returns:
      (undocumented)
    • copy

      public FeatureHasher copy(ParamMap extra)
      Description copied from interface: Params
      Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
      Specified by:
      copy in interface Params
      Specified by:
      copy in class Transformer
      Parameters:
      extra - (undocumented)
      Returns:
      (undocumented)
    • transformSchema

      public StructType transformSchema(StructType schema)
      Description copied from class: PipelineStage
      Check transform validity and derive the output schema from the input schema.

      We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.

      Specified by:
      transformSchema in class PipelineStage
      Parameters:
      schema - (undocumented)
      Returns:
      (undocumented)
    • toString

      public String toString()
      Specified by:
      toString in interface Identifiable
      Overrides:
      toString in class Object