Class ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>
Object
org.apache.spark.ml.PipelineStage
org.apache.spark.ml.Transformer
org.apache.spark.ml.Model<M>
org.apache.spark.ml.PredictionModel<FeaturesType,M>
org.apache.spark.ml.classification.ClassificationModel<FeaturesType,M>
org.apache.spark.ml.classification.ProbabilisticClassificationModel<FeaturesType,M>
- Type Parameters:
FeaturesType
- Type of input features. E.g., Vector
M
- Concrete Model type
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, ClassifierParams, ProbabilisticClassifierParams, Params, HasFeaturesCol, HasLabelCol, HasPredictionCol, HasProbabilityCol, HasRawPredictionCol, HasThresholds, PredictorParams, Identifiable, scala.Serializable
- Direct Known Subclasses:
DecisionTreeClassificationModel, FMClassificationModel, GBTClassificationModel, LogisticRegressionModel, MultilayerPerceptronClassificationModel, NaiveBayesModel, RandomForestClassificationModel
public abstract class ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>>
extends ClassificationModel<FeaturesType,M>
implements ProbabilisticClassifierParams
Model produced by a ProbabilisticClassifier.
Classes are indexed {0, 1, ..., numClasses - 1}.
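For example, fitting any ProbabilisticClassifier yields such a model. The Scala sketch below assumes training and test DataFrames with "label" and "features" columns (these names and data are not part of this API):

  import org.apache.spark.ml.classification.LogisticRegression

  val lr = new LogisticRegression().setMaxIter(10)
  // LogisticRegressionModel is a ProbabilisticClassificationModel
  val model = lr.fit(training)
  // Appends prediction, rawPrediction and probability columns
  val scored = model.transform(test)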
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
-
Method Summary
static void normalizeToProbabilitiesInPlace(Vector v)
Normalize a vector of raw predictions to be a multinomial probability vector, in place.
Vector predictProbability(FeaturesType features)
Predict the probability of each class given the features.
Param<String> probabilityCol()
Param for Column name for predicted class conditional probabilities.
M setProbabilityCol(String value)
M setThresholds(double[] value)
DoubleArrayParam thresholds()
Param for Thresholds in multi-class classification to adjust the probability of predicting each class.
Dataset<Row> transform(Dataset<?> dataset)
Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters: predicted labels as PredictionModel.predictionCol() of type Double; raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector; probability of each class as probabilityCol() of type Vector.
StructType transformSchema(StructType schema)
Check transform validity and derive the output schema from the input schema.
Methods inherited from class org.apache.spark.ml.classification.ClassificationModel
numClasses, predict, predictRaw, rawPredictionCol, setRawPredictionCol, transformImpl
Methods inherited from class org.apache.spark.ml.PredictionModel
featuresCol, labelCol, numFeatures, predictionCol, setFeaturesCol, setPredictionCol
Methods inherited from class org.apache.spark.ml.Transformer
transform, transform, transform
Methods inherited from class org.apache.spark.ml.PipelineStage
params
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.ml.param.shared.HasFeaturesCol
featuresCol, getFeaturesCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasLabelCol
getLabelCol, labelCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasPredictionCol
getPredictionCol, predictionCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasProbabilityCol
getProbabilityCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasRawPredictionCol
getRawPredictionCol, rawPredictionCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasThresholds
getThresholds
Methods inherited from interface org.apache.spark.ml.util.Identifiable
toString, uid
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq
Methods inherited from interface org.apache.spark.ml.param.Params
clear, copy, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface org.apache.spark.ml.classification.ProbabilisticClassifierParams
validateAndTransformSchema
-
Constructor Details
-
ProbabilisticClassificationModel
public ProbabilisticClassificationModel()
-
-
Method Details
-
normalizeToProbabilitiesInPlace
Normalize a vector of raw predictions to be a multinomial probability vector, in place. The input raw predictions should be nonnegative. The output vector sums to 1.
NOTE: This is NOT applicable to all models, only ones which effectively use class instance counts for raw predictions.
- Parameters:
v
- (undocumented)
- Throws:
IllegalArgumentException
- if the input vector is all zeros or contains negative values
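For intuition, the normalization divides each nonnegative raw score by the vector's sum. The Scala sketch below is an illustrative equivalent, not the Spark implementation itself:

  // Illustrative only: raw scores (e.g. class instance counts) -> probability vector
  val raw = Array(2.0, 1.0, 1.0)
  val total = raw.sum
  require(raw.forall(_ >= 0) && total > 0, "raw predictions must be nonnegative and not all zero")
  val probs = raw.map(_ / total)   // [0.5, 0.25, 0.25], sums to 1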
-
thresholds
Description copied from interface: HasThresholds
Param for Thresholds in multi-class classification to adjust the probability of predicting each class. The array must have length equal to the number of classes, with values > 0, except that at most one value may be 0. The class with the largest value of p/t is predicted, where p is the original probability of that class and t is the class's threshold.
- Specified by:
thresholds
in interface HasThresholds
- Returns:
- (undocumented)
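A worked Scala sketch of the p/t rule with hypothetical probabilities and thresholds (the arrays below are examples, not defaults):

  // Hypothetical 3-class probabilities and thresholds (as passed to setThresholds)
  val p = Array(0.5, 0.3, 0.2)                             // class conditional probabilities
  val t = Array(0.9, 0.4, 0.2)                             // per-class thresholds
  val scores = p.zip(t).map { case (pi, ti) => pi / ti }   // p/t: 0.56, 0.75, 1.0 (approx.)
  val predicted = scores.indices.maxBy(i => scores(i))     // class 2 wins despite its low raw probability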
-
probabilityCol
Description copied from interface: HasProbabilityCol
Param for Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.
- Specified by:
probabilityCol
in interface HasProbabilityCol
- Returns:
- (undocumented)
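For example (a sketch; model and df are assumed to exist and the new column name is hypothetical), the output column can be renamed before scoring and read back afterwards:

  val scored = model
    .setProbabilityCol("classProbabilities")   // rename the probability output column
    .transform(df)
  scored.select("classProbabilities").show(5, truncate = false)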
-
setProbabilityCol
-
setThresholds
-
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.
- Overrides:
transformSchema
in class ClassificationModel<FeaturesType, M extends ProbabilisticClassificationModel<FeaturesType, M>>
- Parameters:
schema
- (undocumented)
- Returns:
- (undocumented)
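For example (a sketch; model and df are assumed to exist), the output schema can be derived and inspected without running the job:

  val outSchema = model.transformSchema(df.schema)
  // Includes the prediction, rawPrediction and probability fields in addition to the input fields
  outSchema.fieldNames.foreach(println)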
-
transform
Transforms dataset by reading from PredictionModel.featuresCol(), and appending new columns as specified by parameters:
- predicted labels as PredictionModel.predictionCol() of type Double
- raw predictions (confidences) as ClassificationModel.rawPredictionCol() of type Vector
- probability of each class as probabilityCol() of type Vector
- Overrides:
transform
in class ClassificationModel<FeaturesType, M extends ProbabilisticClassificationModel<FeaturesType, M>>
- Parameters:
dataset
- input dataset
- Returns:
- transformed dataset
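A usage sketch (model and test are assumed to exist; the column names shown are the defaults):

  val predictions = model.transform(test)
  predictions
    .select("prediction", "rawPrediction", "probability")
    .show(5, truncate = false)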
-
predictProbability
Predict the probability of each class given the features. These predictions are also called class conditional probabilities.
This internal method is used to implement transform() and output probabilityCol().
- Parameters:
features
- (undocumented)
- Returns:
- Estimated class conditional probabilities
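A usage sketch for a fitted model whose FeaturesType is Vector, assuming predictProbability is publicly callable as in recent Spark releases (the feature values are hypothetical):

  import org.apache.spark.ml.linalg.Vectors

  val features = Vectors.dense(1.2, 0.0, 3.4)      // hypothetical feature vector
  val probs = model.predictProbability(features)   // Vector of length numClasses, entries sum to 1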
-