Package org.apache.spark.ml.regression
Class AFTSurvivalRegressionModel
java.lang.Object
  org.apache.spark.ml.PipelineStage
    org.apache.spark.ml.Transformer
      org.apache.spark.ml.Model<M>
        org.apache.spark.ml.PredictionModel<FeaturesType,M>
          org.apache.spark.ml.regression.RegressionModel<Vector,AFTSurvivalRegressionModel>
            org.apache.spark.ml.regression.AFTSurvivalRegressionModel
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasAggregationDepth, HasFeaturesCol, HasFitIntercept, HasLabelCol, HasMaxBlockSizeInMB, HasMaxIter, HasPredictionCol, HasTol, PredictorParams, AFTSurvivalRegressionParams, Identifiable, MLWritable
public class AFTSurvivalRegressionModel
extends RegressionModel<Vector,AFTSurvivalRegressionModel>
implements AFTSurvivalRegressionParams, MLWritable
Model produced by AFTSurvivalRegression.
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Method Summary
final IntParam aggregationDepth()
    Param for suggested depth for treeAggregate (>= 2).
Param<String> censorCol()
    Param for censor column name.
Vector coefficients()
AFTSurvivalRegressionModel copy(ParamMap extra)
    Creates a copy of this instance with the same UID and some extra params.
final BooleanParam fitIntercept()
    Param for whether to fit an intercept term.
double intercept()
static AFTSurvivalRegressionModel load(String path)
final DoubleParam maxBlockSizeInMB()
    Param for maximum memory in MB for stacking input data into blocks.
final IntParam maxIter()
    Param for maximum number of iterations (>= 0).
int numFeatures()
    Returns the number of features the model was trained on.
double predict(Vector features)
    Predict label for the given features.
Vector predictQuantiles(Vector features)
final DoubleArrayParam quantileProbabilities()
    Param for quantile probabilities array.
Param<String> quantilesCol()
    Param for quantiles column name.
static MLReader<AFTSurvivalRegressionModel> read()
double scale()
AFTSurvivalRegressionModel setQuantileProbabilities(double[] value)
AFTSurvivalRegressionModel setQuantilesCol(String value)
final DoubleParam tol()
    Param for the convergence tolerance for iterative algorithms (>= 0).
String toString()
Dataset<Row> transform(Dataset<?> dataset)
    Transforms dataset by reading from PredictionModel.featuresCol(), calling predict, and storing the predictions as a new column PredictionModel.predictionCol().
StructType transformSchema(StructType schema)
    Check transform validity and derive the output schema from the input schema.
String uid()
    An immutable unique ID for the object and its derivatives.
MLWriter write()
    Returns an MLWriter instance for this ML instance.
Methods inherited from class org.apache.spark.ml.PredictionModel:
featuresCol, labelCol, predictionCol, setFeaturesCol, setPredictionCol

Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage:
params

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams:
getCensorCol, getQuantileProbabilities, getQuantilesCol, hasQuantilesCol, validateAndTransformSchema

Methods inherited from interface org.apache.spark.ml.param.shared.HasAggregationDepth:
getAggregationDepth

Methods inherited from interface org.apache.spark.ml.param.shared.HasFeaturesCol:
featuresCol, getFeaturesCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasFitIntercept:
getFitIntercept

Methods inherited from interface org.apache.spark.ml.param.shared.HasLabelCol:
getLabelCol, labelCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasMaxBlockSizeInMB:
getMaxBlockSizeInMB

Methods inherited from interface org.apache.spark.ml.param.shared.HasMaxIter:
getMaxIter

Methods inherited from interface org.apache.spark.ml.param.shared.HasPredictionCol:
getPredictionCol, predictionCol

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, estimateMatadataSize, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.PredictorParams:
validateAndTransformSchema
Method Details

read
public static MLReader<AFTSurvivalRegressionModel> read()

load
public static AFTSurvivalRegressionModel load(String path)
censorCol
Description copied from interface: AFTSurvivalRegressionParams
Param for censor column name. The value of this column could be 0 or 1. If the value is 1, it means the event has occurred, i.e. uncensored; otherwise censored.
Specified by:
censorCol in interface AFTSurvivalRegressionParams
Returns:
(undocumented)
quantileProbabilities
Description copied from interface: AFTSurvivalRegressionParams
Param for quantile probabilities array. Values of the quantile probabilities array should be in the range (0, 1), and the array should be non-empty.
Specified by:
quantileProbabilities in interface AFTSurvivalRegressionParams
Returns:
(undocumented)
quantilesCol
Description copied from interface: AFTSurvivalRegressionParams
Param for quantiles column name. This column will output quantiles of corresponding quantileProbabilities if it is set.
Specified by:
quantilesCol in interface AFTSurvivalRegressionParams
Returns:
(undocumented)
maxBlockSizeInMB
Description copied from interface: HasMaxBlockSizeInMB
Param for maximum memory in MB for stacking input data into blocks. Data is stacked within partitions; if the block size is larger than the remaining data in a partition, it is adjusted to that size. The default 0.0 means an optimal value is chosen automatically, depending on the specific algorithm. Must be >= 0.
Specified by:
maxBlockSizeInMB in interface HasMaxBlockSizeInMB
Returns:
(undocumented)
aggregationDepth
Description copied from interface: HasAggregationDepth
Param for suggested depth for treeAggregate (>= 2).
Specified by:
aggregationDepth in interface HasAggregationDepth
Returns:
(undocumented)
fitIntercept
Description copied from interface: HasFitIntercept
Param for whether to fit an intercept term.
Specified by:
fitIntercept in interface HasFitIntercept
Returns:
(undocumented)
tol
Description copied from interface: HasTol
Param for the convergence tolerance for iterative algorithms (>= 0).
maxIter
Description copied from interface: HasMaxIter
Param for maximum number of iterations (>= 0).
Specified by:
maxIter in interface HasMaxIter
Returns:
(undocumented)
uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by:
uid in interface Identifiable
Returns:
(undocumented)
coefficients
public Vector coefficients()
intercept
public double intercept()
scale
public double scale()
numFeatures
public int numFeatures()
Description copied from class: PredictionModel
Returns the number of features the model was trained on. If unknown, returns -1.
Overrides:
numFeatures in class PredictionModel<Vector,AFTSurvivalRegressionModel>
setQuantileProbabilities
public AFTSurvivalRegressionModel setQuantileProbabilities(double[] value)
setQuantilesCol
public AFTSurvivalRegressionModel setQuantilesCol(String value)
predictQuantiles
public Vector predictQuantiles(Vector features)
predict
Description copied from class: PredictionModel
Predict label for the given features. This method is used to implement transform() and output PredictionModel.predictionCol().
Specified by:
predict in class PredictionModel<Vector,AFTSurvivalRegressionModel>
Parameters:
features - (undocumented)
Returns:
(undocumented)
transform
Description copied from class:PredictionModelTransforms dataset by reading fromPredictionModel.featuresCol(), callingpredict, and storing the predictions as a new columnPredictionModel.predictionCol().- Overrides:
transformin classPredictionModel<Vector,AFTSurvivalRegressionModel> - Parameters:
dataset- input dataset- Returns:
- transformed dataset with
PredictionModel.predictionCol()of typeDouble
-
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). Typical implementations should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Overrides:
transformSchema in class PredictionModel<Vector,AFTSurvivalRegressionModel>
Parameters:
schema - (undocumented)
Returns:
(undocumented)
copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Specified by:
copy in interface Params
Specified by:
copy in class Model<AFTSurvivalRegressionModel>
Parameters:
extra - (undocumented)
Returns:
(undocumented)
write
Description copied from interface: MLWritable
Returns an MLWriter instance for this ML instance.
Specified by:
write in interface MLWritable
Returns:
(undocumented)
toString
Specified by:
toString in interface Identifiable
Overrides:
toString in class Object