public class RankingMetrics<T>
extends Object
implements org.apache.spark.internal.Logging, scala.Serializable
Java users should use RankingMetrics$.of to create a RankingMetrics instance.
param: predictionAndLabels an RDD of (predicted ranking, ground truth set) pairs or (predicted ranking, ground truth set, relevance value of ground truth set) triplets. Since 3.4.0, it supports NDCG evaluation with relevance values.
| Constructor and Description |
| --- |
| RankingMetrics(RDD<? extends scala.Product> predictionAndLabels, scala.reflect.ClassTag<T> evidence$1) |
| Modifier and Type | Method and Description |
| --- | --- |
| double | meanAveragePrecision() |
| double | meanAveragePrecisionAt(int k) Returns the mean average precision (MAP) at ranking position k of all the queries. |
| double | ndcgAt(int k) Compute the average NDCG value of all the queries, truncated at ranking position k. |
| static <E,T extends Iterable<E>,A extends Iterable<Object>> RankingMetrics<E> | of(JavaRDD<? extends scala.Product> predictionAndLabels) Creates a RankingMetrics instance (for Java users). |
| double | precisionAt(int k) Compute the average precision of all the queries, truncated at ranking position k. |
| double | recallAt(int k) Compute the average recall of all the queries, truncated at ranking position k. |
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging: $init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public static <E,T extends Iterable<E>,A extends Iterable<Object>> RankingMetrics<E> of(JavaRDD<? extends scala.Product> predictionAndLabels)
Creates a RankingMetrics instance (for Java users).
Parameters:
predictionAndLabels - a JavaRDD of (predicted ranking, ground truth set) pairs or (predicted ranking, ground truth set, relevance value of ground truth set) triplets. Since 3.4.0, it supports NDCG evaluation with relevance values.

public double precisionAt(int k)
Compute the average precision of all the queries, truncated at ranking position k.
If, for a query, the ranking algorithm returns n results (n < k), the precision value will be computed as #(relevant items retrieved) / k. This formula also applies when the size of the ground truth set is less than k.
If a query has an empty ground truth set, zero will be used as precision together with a log warning.
See the following paper for detail:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
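The formula above can be sketched in plain Java. This is an illustrative re-implementation of the documented precision@k rule, not Spark's internals; the class and method names are hypothetical:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PrecisionAtK {
    /** precision@k = #(relevant items among the first min(n, k) predictions) / k. */
    public static <T> double precisionAt(List<T> predicted, Set<T> groundTruth, int k) {
        if (k <= 0) throw new IllegalArgumentException("k must be positive");
        if (groundTruth.isEmpty()) return 0.0; // per the doc: zero (with a log warning)
        int n = Math.min(predicted.size(), k);
        int hits = 0;
        for (int i = 0; i < n; i++) {
            if (groundTruth.contains(predicted.get(i))) hits++;
        }
        return (double) hits / k; // denominator stays k even when n < k
    }

    public static void main(String[] args) {
        List<Integer> predicted = Arrays.asList(1, 6, 2, 7, 8);
        Set<Integer> truth = new HashSet<>(Arrays.asList(1, 2, 3, 4, 5));
        System.out.println(precisionAt(predicted, truth, 5)); // 2 hits / 5 = 0.4
    }
}
```

Note that dividing by k rather than by the number of returned results penalizes rankers that return fewer than k items.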
Parameters:
k - the position to compute the truncated precision, must be positive

public double meanAveragePrecision()
public double meanAveragePrecisionAt(int k)
Returns the mean average precision (MAP) at ranking position k of all the queries.
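As a sketch of what MAP@k averages per query, here is a common definition of average precision at k in plain Java (precision at each relevant hit's rank, normalized by min(k, |ground truth|)). This is a hypothetical illustration; Spark's exact normalization may differ:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AveragePrecisionAtK {
    /** AP@k for one query; MAP@k is this value averaged over all queries. */
    public static <T> double averagePrecisionAt(List<T> predicted, Set<T> groundTruth, int k) {
        if (k <= 0) throw new IllegalArgumentException("k must be positive");
        if (groundTruth.isEmpty()) return 0.0;
        int hits = 0;
        double sum = 0.0;
        int n = Math.min(predicted.size(), k);
        for (int i = 0; i < n; i++) {
            if (groundTruth.contains(predicted.get(i))) {
                hits++;
                sum += (double) hits / (i + 1); // precision at this hit's rank
            }
        }
        return sum / Math.min(groundTruth.size(), k);
    }

    public static void main(String[] args) {
        List<Integer> predicted = Arrays.asList(1, 6, 2);
        Set<Integer> truth = new HashSet<>(Arrays.asList(1, 2));
        // hits at ranks 1 and 3: (1/1 + 2/3) / 2
        System.out.println(averagePrecisionAt(predicted, truth, 3));
    }
}
```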
Parameters:
k - the position to compute the truncated precision, must be positive

public double ndcgAt(int k)
Compute the average NDCG value of all the queries, truncated at ranking position k.
If a query has an empty ground truth set, zero will be used as ndcg together with a log warning.
See the following paper for detail:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
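The per-query NDCG@k computation can be sketched as follows for binary relevance (each retrieved item's gain discounted by log2 of its rank, normalized by the ideal DCG). This is an illustrative sketch, not Spark's implementation; graded relevance, supported since 3.4.0, would replace the 1.0 gains with per-item relevance values:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NdcgAtK {
    /** Binary-relevance NDCG@k for a single query. */
    public static <T> double ndcgAt(List<T> predicted, Set<T> groundTruth, int k) {
        if (k <= 0) throw new IllegalArgumentException("k must be positive");
        if (groundTruth.isEmpty()) return 0.0; // per the doc: zero (with a log warning)
        double dcg = 0.0;
        int n = Math.min(predicted.size(), k);
        for (int i = 0; i < n; i++) {
            if (groundTruth.contains(predicted.get(i))) {
                dcg += 1.0 / (Math.log(i + 2) / Math.log(2)); // gain / log2(rank + 1)
            }
        }
        // Ideal DCG: all relevant items ranked first.
        double idcg = 0.0;
        int m = Math.min(groundTruth.size(), k);
        for (int i = 0; i < m; i++) {
            idcg += 1.0 / (Math.log(i + 2) / Math.log(2));
        }
        return dcg / idcg;
    }

    public static void main(String[] args) {
        List<Integer> predicted = Arrays.asList(1, 6, 2);
        Set<Integer> truth = new HashSet<>(Arrays.asList(1, 2));
        System.out.println(ndcgAt(predicted, truth, 3)); // a perfect ranking would give 1.0
    }
}
```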
Parameters:
k - the position to compute the truncated ndcg, must be positive

public double recallAt(int k)
Compute the average recall of all the queries, truncated at ranking position k.
If, for a query, the ranking algorithm returns n results, the recall value will be computed as #(relevant items retrieved) / #(ground truth set). This formula also applies when the size of the ground truth set is less than k.
If a query has an empty ground truth set, zero will be used as recall together with a log warning.
See the following paper for detail:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
Parameters:
k - the position to compute the truncated recall, must be positive
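The recall@k rule above can be sketched in plain Java; unlike precision@k, the denominator is the ground truth size rather than k. Again an illustrative re-implementation with hypothetical names, not Spark's internals:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RecallAtK {
    /** recall@k = #(relevant items among the first min(n, k) predictions) / #(ground truth). */
    public static <T> double recallAt(List<T> predicted, Set<T> groundTruth, int k) {
        if (k <= 0) throw new IllegalArgumentException("k must be positive");
        if (groundTruth.isEmpty()) return 0.0; // per the doc: zero (with a log warning)
        int n = Math.min(predicted.size(), k);
        int hits = 0;
        for (int i = 0; i < n; i++) {
            if (groundTruth.contains(predicted.get(i))) hits++;
        }
        return (double) hits / groundTruth.size(); // denominator is the ground truth size
    }

    public static void main(String[] args) {
        List<Integer> predicted = Arrays.asList(1, 6, 2, 7, 8);
        Set<Integer> truth = new HashSet<>(Arrays.asList(1, 2, 3, 4, 5));
        System.out.println(recallAt(predicted, truth, 5)); // 2 hits / 5 relevant = 0.4
    }
}
```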