Class BucketedRandomProjectionLSHModel
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, BucketedRandomProjectionLSHParams, LSHParams, Params, HasInputCol, HasOutputCol, Identifiable, MLWritable
Model produced by BucketedRandomProjectionLSH, where multiple random vectors are stored. The
vectors are normalized to be unit vectors and each vector is used in a hash function:
h_i(x) = floor(r_i.dot(x) / bucketLength)
where r_i is the i-th random unit vector. The number of buckets will be (max L2 norm of input
vectors) / bucketLength.
param: randMatrix A matrix with each row representing a hash function.
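The hash function above can be sketched in a few lines of plain Python. This is an illustrative re-implementation of the formula only, not Spark's code; the helper name `bucket_hash` is made up for this example.

```python
import math

def bucket_hash(r, x, bucket_length):
    """h_i(x) = floor(r_i . x / bucketLength) for one random unit vector r_i."""
    dot = sum(ri * xi for ri, xi in zip(r, x))
    return math.floor(dot / bucket_length)

# A unit vector along the first axis slices space into buckets of width
# bucketLength along that axis.
r = [1.0, 0.0]
print(bucket_hash(r, [0.5, 3.0], 2.0))   # dot = 0.5, floor(0.25) = 0
print(bucket_hash(r, [4.2, -1.0], 2.0))  # dot = 4.2, floor(2.1)  = 2
```

Because the projections are onto unit vectors, the largest bucket index any input can reach is bounded by (max L2 norm of input vectors) / bucketLength, matching the bucket-count statement above.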
Nested Class Summary
Nested Classes
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Method Summary
Dataset<?> approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors) - Overloaded method for approxNearestNeighbors.
Dataset<?> approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors, String distCol) - Given a large dataset and an item, approximately find at most k items which have the closest distance to the item.
Dataset<?> approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold) - Overloaded method for approxSimilarityJoin.
Dataset<?> approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold, String distCol) - Join two datasets to approximately find all pairs of rows whose distance are smaller than the threshold.
DoubleParam bucketLength() - The length of each hash bucket; a larger bucket lowers the false negative rate.
BucketedRandomProjectionLSHModel copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params.
Param<String> inputCol() - Param for input column name.
final IntParam numHashTables() - Param for the number of hash tables used in LSH OR-amplification.
Param<String> outputCol() - Param for output column name.
static MLReader<BucketedRandomProjectionLSHModel> read()
BucketedRandomProjectionLSHModel setInputCol(String value)
BucketedRandomProjectionLSHModel setOutputCol(String value)
String toString()
Dataset<Row> transform(Dataset<?> dataset) - Transforms the input dataset.
StructType transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.
String uid() - An immutable unique ID for the object and its derivatives.
MLWriter write() - Returns an MLWriter instance for this ML instance.
Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform
Methods inherited from class org.apache.spark.ml.PipelineStage:
params
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.ml.feature.BucketedRandomProjectionLSHParams:
getBucketLength
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCol:
getInputCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol:
getOutputCol
Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
Methods inherited from interface org.apache.spark.ml.feature.LSHParams:
getNumHashTables, validateAndTransformSchema
Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save
Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, estimateMatadataSize, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
-
Method Details
-
read
Returns an MLReader instance for this class.
-
load
Reads an ML instance from the input path, a shortcut of read().load(path).
-
bucketLength
Description copied from interface: BucketedRandomProjectionLSHParams
The length of each hash bucket; a larger bucket lowers the false negative rate. The number of buckets will be (max L2 norm of input vectors) / bucketLength. If input vectors are normalized, 1 to 10 times pow(numRecords, -1/inputDim) would be a reasonable value.
- Specified by:
bucketLength in interface BucketedRandomProjectionLSHParams
- Returns:
- (undocumented)
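The heuristic above can be worked through numerically. This is a sketch of the rule of thumb only; the dataset size and dimensionality are made-up example values.

```python
# Rough bucketLength heuristic from the docs: for normalized input vectors,
# 1 to 10 times pow(numRecords, -1/inputDim) is a reasonable value.
num_records = 1_000_000   # hypothetical dataset size
input_dim = 50            # hypothetical feature dimension
base = num_records ** (-1.0 / input_dim)
lo, hi = 1 * base, 10 * base
print(f"suggested bucketLength in [{lo:.3f}, {hi:.3f}]")  # roughly [0.759, 7.586]
```

Larger values put more points per bucket (fewer false negatives, more candidates to rank); smaller values do the opposite.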
-
uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
- Specified by:
uid in interface Identifiable
- Returns:
- (undocumented)
-
setInputCol
-
setOutputCol
-
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
- Specified by:
copy in interface Params
- Specified by:
copy in class Model<BucketedRandomProjectionLSHModel>
- Parameters:
extra - (undocumented)
- Returns:
- (undocumented)
-
write
Description copied from interface: MLWritable
Returns an MLWriter instance for this ML instance.
- Specified by:
write in interface MLWritable
- Returns:
- (undocumented)
-
toString
- Specified by:
toString in interface Identifiable
- Overrides:
toString in class Object
-
approxNearestNeighbors
public Dataset<?> approxNearestNeighbors(Dataset<?> dataset, Vector key, int numNearestNeighbors, String distCol)
Given a large dataset and an item, approximately find at most k items which have the closest distance to the item. If the HasOutputCol.outputCol() is missing, the method will transform the data; if it exists, the method will use it. This allows caching of the transformed data when necessary.
- Parameters:
dataset - The dataset to search for nearest neighbors of the key.
key - Feature vector representing the item to search for.
numNearestNeighbors - The maximum number of nearest neighbors.
distCol - Output column for storing the distance between each result row and the key.
- Returns:
- A dataset containing at most k items closest to the key. A column "distCol" is added to show the distance between each row and the key.
- Note:
- This method is experimental and will likely change behavior in the next release.
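The idea behind this method can be shown with a toy, single-table version in plain Python: hash every point, keep only the points that land in the key's bucket, then rank that small candidate set by true distance. This is a conceptual sketch, not Spark's implementation (Spark uses multiple hash tables and DataFrame operations); all names here are made up.

```python
import math

def bucket_hash(r, x, bucket_length):
    return math.floor(sum(a * b for a, b in zip(r, x)) / bucket_length)

def toy_approx_nearest_neighbors(points, key, k, r, bucket_length):
    """Rank only the points sharing the key's bucket, then take the k closest."""
    key_bucket = bucket_hash(r, key, bucket_length)
    candidates = [p for p in points if bucket_hash(r, p, bucket_length) == key_bucket]
    return sorted(candidates, key=lambda p: math.dist(p, key))[:k]

points = [(0.1, 0.0), (0.4, 0.0), (5.0, 0.0), (0.2, 0.0)]
# (5.0, 0.0) falls in a distant bucket and is never even ranked.
print(toy_approx_nearest_neighbors(points, (0.0, 0.0), 2, (1.0, 0.0), 1.0))
```

The result is approximate precisely because a true neighbor that hashes into a different bucket is missed; the numHashTables param exists to make that less likely.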
-
approxNearestNeighbors
Overloaded method for approxNearestNeighbors. Uses "distCol" as the default distCol.
- Parameters:
dataset - (undocumented)
key - (undocumented)
numNearestNeighbors - (undocumented)
- Returns:
- (undocumented)
-
approxSimilarityJoin
public Dataset<?> approxSimilarityJoin(Dataset<?> datasetA, Dataset<?> datasetB, double threshold, String distCol)
Join two datasets to approximately find all pairs of rows whose distance are smaller than the threshold. If the HasOutputCol.outputCol() is missing, the method will transform the data; if it exists, the method will use it. This allows caching of the transformed data when necessary.
- Parameters:
datasetA - One of the datasets to join.
datasetB - Another dataset to join.
threshold - The threshold for the distance of row pairs.
distCol - Output column for storing the distance between each pair of rows.
- Returns:
- A joined dataset containing pairs of rows. The original rows are in columns "datasetA" and "datasetB", and a column "distCol" is added to show the distance between each pair.
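The join can likewise be illustrated with a toy, single-table sketch: candidate pairs come only from matching buckets, and the true distance then filters them against the threshold. Again this is conceptual, not Spark's implementation, and the names are invented for the example.

```python
import math

def bucket_hash(r, x, bucket_length):
    return math.floor(sum(a * b for a, b in zip(r, x)) / bucket_length)

def toy_approx_similarity_join(ds_a, ds_b, threshold, r, bucket_length):
    """Emit (a, b, distance) only for bucket-colliding pairs within threshold."""
    pairs = []
    for a in ds_a:
        for b in ds_b:
            if bucket_hash(r, a, bucket_length) == bucket_hash(r, b, bucket_length):
                d = math.dist(a, b)
                if d < threshold:
                    pairs.append((a, b, d))
    return pairs

a = [(0.0, 0.0), (3.0, 0.0)]
b = [(0.2, 0.0), (3.1, 0.0)]
# Only the two close pairs share buckets and pass the threshold.
print(toy_approx_similarity_join(a, b, 0.5, (1.0, 0.0), 1.0))
```

In Spark the bucketing step replaces the quadratic scan with a join on hash values, which is what makes the method scale; the toy double loop is only for clarity.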
-
approxSimilarityJoin
Overloaded method for approxSimilarityJoin. Uses "distCol" as the default distCol.
- Parameters:
datasetA - (undocumented)
datasetB - (undocumented)
threshold - (undocumented)
- Returns:
- (undocumented)
-
inputCol
Description copied from interface: HasInputCol
Param for input column name.
- Specified by:
inputCol in interface HasInputCol
- Returns:
- (undocumented)
-
numHashTables
Description copied from interface: LSHParams
Param for the number of hash tables used in LSH OR-amplification. LSH OR-amplification can be used to reduce the false negative rate. Higher values for this param lead to a reduced false negative rate, at the expense of added computational complexity.
- Specified by:
numHashTables in interface LSHParams
- Returns:
- (undocumented)
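OR-amplification means two vectors count as a match if they collide in any one of the hash tables, so adding tables can only add collisions, never remove them. A minimal sketch of that rule, with made-up names and hand-picked unit vectors:

```python
import math

def bucket_hash(r, x, bucket_length):
    return math.floor(sum(a * b for a, b in zip(r, x)) / bucket_length)

def or_amplified_match(x, y, random_vectors, bucket_length):
    """x and y match if they share a bucket in ANY table (one vector per table).
    More tables -> more chances to collide -> lower false negative rate."""
    return any(bucket_hash(r, x, bucket_length) == bucket_hash(r, y, bucket_length)
               for r in random_vectors)

x, y = (0.9, 0.0), (1.1, 0.0)          # nearby points straddling a bucket edge
one_table = [(1.0, 0.0)]               # buckets 0 vs 1: a false negative
two_tables = [(1.0, 0.0), (0.0, 1.0)]  # second table puts both in bucket 0
print(or_amplified_match(x, y, one_table, 1.0))   # False
print(or_amplified_match(x, y, two_tables, 1.0))  # True
```

The cost side is that every extra table also admits more far-apart pairs as candidates, which is the added computational complexity the description mentions.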
-
outputCol
Description copied from interface: HasOutputCol
Param for output column name.
- Specified by:
outputCol in interface HasOutputCol
- Returns:
- (undocumented)
-
transform
Description copied from class: Transformer
Transforms the input dataset.
- Specified by:
transform in class Transformer
- Parameters:
dataset - (undocumented)
- Returns:
- (undocumented)
-
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
- Specified by:
transformSchema in class PipelineStage
- Parameters:
schema - (undocumented)
- Returns:
- (undocumented)
-