Statistics

class pyspark.mllib.stat.Statistics

Methods

chiSqTest(observed[, expected])

If observed is a Vector, conduct Pearson’s chi-squared goodness of fit test of the observed data against the expected distribution, or against the uniform distribution (by default), with each category having an expected frequency of 1 / len(observed).

colStats(rdd)

Computes column-wise summary statistics for the input RDD[Vector].

corr(x[, y, method])

Compute the correlation (matrix) for the input RDD(s) using the specified method.

kolmogorovSmirnovTest(data[, distName])

Performs the Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution.

Methods Documentation

static chiSqTest(observed: Union[pyspark.mllib.linalg.Matrix, pyspark.rdd.RDD[pyspark.mllib.regression.LabeledPoint], pyspark.mllib.linalg.Vector], expected: Optional[pyspark.mllib.linalg.Vector] = None) → Union[pyspark.mllib.stat.test.ChiSqTestResult, List[pyspark.mllib.stat.test.ChiSqTestResult]]

If observed is a Vector, conduct Pearson’s chi-squared goodness of fit test of the observed data against the expected distribution, or against the uniform distribution (by default), with each category having an expected frequency of 1 / len(observed).

If observed is a matrix, conduct Pearson’s independence test on the input contingency matrix, which cannot contain negative entries, or rows or columns that sum to 0.

If observed is an RDD of LabeledPoint, conduct Pearson’s independence test for every feature against the label across the input RDD. For each feature, the (feature, label) pairs are converted into a contingency matrix for which the chi-squared statistic is computed. All label and feature values must be categorical.

Parameters
observed : pyspark.mllib.linalg.Vector or pyspark.mllib.linalg.Matrix

It could be a vector containing the observed categorical counts/relative frequencies, or the contingency matrix (containing either counts or relative frequencies), or an RDD of LabeledPoint containing the labeled dataset with categorical features. Real-valued features will be treated as categorical for each distinct value.

expected : pyspark.mllib.linalg.Vector, optional

Vector containing the expected categorical counts/relative frequencies. expected is rescaled if the expected sum differs from the observed sum.

Returns
pyspark.mllib.stat.ChiSqTestResult or list of pyspark.mllib.stat.ChiSqTestResult

object containing the test statistic, degrees of freedom, p-value, the method used, and the null hypothesis. When observed is an RDD of LabeledPoint, a list of such objects is returned, one per feature.

Notes

observed cannot contain negative values

Examples

>>> from pyspark.mllib.linalg import Vectors, Matrices
>>> observed = Vectors.dense([4, 6, 5])
>>> pearson = Statistics.chiSqTest(observed)
>>> print(pearson.statistic)
0.4
>>> pearson.degreesOfFreedom
2
>>> print(round(pearson.pValue, 4))
0.8187
>>> pearson.method
'pearson'
>>> pearson.nullHypothesis
'observed follows the same distribution as expected.'
>>> observed = Vectors.dense([21, 38, 43, 80])
>>> expected = Vectors.dense([3, 5, 7, 20])
>>> pearson = Statistics.chiSqTest(observed, expected)
>>> print(round(pearson.pValue, 4))
0.0027
>>> data = [40.0, 24.0, 29.0, 56.0, 32.0, 42.0, 31.0, 10.0, 0.0, 30.0, 15.0, 12.0]
>>> chi = Statistics.chiSqTest(Matrices.dense(3, 4, data))
>>> print(round(chi.statistic, 4))
21.9958
>>> from pyspark.mllib.regression import LabeledPoint
>>> data = [LabeledPoint(0.0, Vectors.dense([0.5, 10.0])),
...         LabeledPoint(0.0, Vectors.dense([1.5, 20.0])),
...         LabeledPoint(1.0, Vectors.dense([1.5, 30.0])),
...         LabeledPoint(0.0, Vectors.dense([3.5, 30.0])),
...         LabeledPoint(0.0, Vectors.dense([3.5, 40.0])),
...         LabeledPoint(1.0, Vectors.dense([3.5, 40.0])),]
>>> rdd = sc.parallelize(data, 4)
>>> chi = Statistics.chiSqTest(rdd)
>>> print(chi[0].statistic)
0.75
>>> print(chi[1].statistic)
1.5
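
As a sanity check, the goodness-of-fit statistic from the first example can be reproduced by hand. The sketch below (plain Python, no PySpark involved) assumes the default uniform null distribution, under which each category's expected count is the total divided by the number of categories:

>>> obs = [4, 6, 5]
>>> exp = sum(obs) / len(obs)  # uniform expected count per category: 5.0
>>> print(round(sum((o - exp) ** 2 / exp for o in obs), 1))
0.4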
static colStats(rdd: pyspark.rdd.RDD[pyspark.mllib.linalg.Vector]) → pyspark.mllib.stat._statistics.MultivariateStatisticalSummary

Computes column-wise summary statistics for the input RDD[Vector].

Parameters
rdd : pyspark.RDD

an RDD[Vector] for which column-wise summary statistics are to be computed.

Returns
MultivariateStatisticalSummary

object containing column-wise summary statistics.

Examples

>>> from pyspark.mllib.linalg import Vectors
>>> rdd = sc.parallelize([Vectors.dense([2, 0, 0, -2]),
...                       Vectors.dense([4, 5, 0,  3]),
...                       Vectors.dense([6, 7, 0,  8])])
>>> cStats = Statistics.colStats(rdd)
>>> cStats.mean()
array([ 4.,  4.,  0.,  3.])
>>> cStats.variance()
array([  4.,  13.,   0.,  25.])
>>> cStats.count()
3
>>> cStats.numNonzeros()
array([ 3.,  2.,  0.,  3.])
>>> cStats.max()
array([ 6.,  7.,  0.,  8.])
>>> cStats.min()
array([ 2.,  0.,  0., -2.])
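
A quick cross-check of this example with NumPy (assuming NumPy is available) confirms that variance() reports the unbiased sample variance, i.e. the sum of squared deviations divided by n - 1:

>>> import numpy as np
>>> mat = np.array([[2, 0, 0, -2], [4, 5, 0, 3], [6, 7, 0, 8]])
>>> print(mat.mean(axis=0).tolist())
[4.0, 4.0, 0.0, 3.0]
>>> print(mat.var(axis=0, ddof=1).tolist())  # ddof=1: divide by n - 1
[4.0, 13.0, 0.0, 25.0]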
static corr(x: Union[pyspark.rdd.RDD[pyspark.mllib.linalg.Vector], pyspark.rdd.RDD[float]], y: Optional[pyspark.rdd.RDD[float]] = None, method: Optional[CorrMethodType] = None) → Union[float, pyspark.mllib.linalg.Matrix]

Compute the correlation (matrix) for the input RDD(s) using the specified method. Methods currently supported: pearson (default), spearman.

If a single RDD of Vectors is passed in, a correlation matrix comparing the columns in the input RDD is returned. Use method to specify the method to be used for a single RDD input. If two RDDs of floats are passed in, a single float is returned.

Parameters
x : pyspark.RDD

an RDD of Vector for which the correlation matrix is to be computed, or an RDD of float of the same cardinality as y when y is specified.

y : pyspark.RDD, optional

an RDD of float of the same cardinality as x.

method : str, optional

String specifying the method to use for computing correlation. Supported: pearson (default), spearman

Returns
pyspark.mllib.linalg.Matrix or float

Correlation matrix comparing columns in x when x is an RDD of Vector, or the correlation between x and y as a single float when two RDDs of float are given.

Examples

>>> x = sc.parallelize([1.0, 0.0, -2.0], 2)
>>> y = sc.parallelize([4.0, 5.0, 3.0], 2)
>>> zeros = sc.parallelize([0.0, 0.0, 0.0], 2)
>>> abs(Statistics.corr(x, y) - 0.6546537) < 1e-7
True
>>> Statistics.corr(x, y) == Statistics.corr(x, y, "pearson")
True
>>> Statistics.corr(x, y, "spearman")
0.5
>>> from math import isnan
>>> isnan(Statistics.corr(x, zeros))
True
>>> from pyspark.mllib.linalg import Vectors
>>> rdd = sc.parallelize([Vectors.dense([1, 0, 0, -2]), Vectors.dense([4, 5, 0, 3]),
...                       Vectors.dense([6, 7, 0,  8]), Vectors.dense([9, 0, 0, 1])])
>>> pearsonCorr = Statistics.corr(rdd)
>>> print(str(pearsonCorr).replace('nan', 'NaN'))
[[ 1.          0.05564149         NaN  0.40047142]
 [ 0.05564149  1.                 NaN  0.91359586]
 [        NaN         NaN  1.                 NaN]
 [ 0.40047142  0.91359586         NaN  1.        ]]
>>> spearmanCorr = Statistics.corr(rdd, method="spearman")
>>> print(str(spearmanCorr).replace('nan', 'NaN'))
[[ 1.          0.10540926         NaN  0.4       ]
 [ 0.10540926  1.                 NaN  0.9486833 ]
 [        NaN         NaN  1.                 NaN]
 [ 0.4         0.9486833          NaN  1.        ]]
>>> try:
...     Statistics.corr(rdd, "spearman")
...     print("Method name as second argument without 'method=' shouldn't be allowed.")
... except TypeError:
...     pass
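
For reference, the Pearson value from the first example can be cross-checked with NumPy (assuming NumPy is available), whose corrcoef computes the same sample correlation:

>>> import numpy as np
>>> r = np.corrcoef([1.0, 0.0, -2.0], [4.0, 5.0, 3.0])[0, 1]
>>> print(round(r, 7))
0.6546537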
static kolmogorovSmirnovTest(data: pyspark.rdd.RDD[float], distName: KolmogorovSmirnovTestDistNameType = 'norm', *params: float) → pyspark.mllib.stat.test.KolmogorovSmirnovTestResult

Performs the Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution. It tests the null hypothesis that the data is generated from a particular distribution.

The given data is sorted and the empirical cumulative distribution function (ECDF) is calculated: at a given point, the ECDF is the number of sample points less than that point divided by the total number of points.

Since the data is sorted, this is a step function that rises by (1 / length of data) for every ordered point.

The KS statistic is the maximum distance between the ECDF and the theoretical CDF. Intuitively, the larger this statistic, the stronger the evidence against the null hypothesis. For specific details of the implementation, please refer to the Scala documentation.

Parameters
data : pyspark.RDD

an RDD of float containing the sample data.

distName : str, optional

Name of the theoretical distribution against which the data is tested; currently only “norm” (the normal distribution) is supported.

*params : float

Additional parameters of the chosen distribution (for “norm”, the mean and standard deviation). If not provided, the default values are used (for “norm”, the standard normal with mean 0 and standard deviation 1).

Returns
pyspark.mllib.stat.KolmogorovSmirnovTestResult

object containing the test statistic, degrees of freedom, p-value, the method used, and the null hypothesis.

Examples

>>> kstest = Statistics.kolmogorovSmirnovTest
>>> data = sc.parallelize([-1.0, 0.0, 1.0])
>>> ksmodel = kstest(data, "norm")
>>> print(round(ksmodel.pValue, 3))
1.0
>>> print(round(ksmodel.statistic, 3))
0.175
>>> ksmodel.nullHypothesis
'Sample follows theoretical distribution'
>>> data = sc.parallelize([2.0, 3.0, 4.0])
>>> ksmodel = kstest(data, "norm", 3.0, 1.0)
>>> print(round(ksmodel.pValue, 3))
1.0
>>> print(round(ksmodel.statistic, 3))
0.175
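
The description above translates directly into a few lines of plain Python. This sketch (standard library only; norm_cdf is a local helper defined here, not part of PySpark) recomputes the KS statistic of the first example by taking, at each sorted sample point, the larger of the ECDF's distances just before and just after its step:

>>> from math import erf, sqrt
>>> def norm_cdf(v):
...     return 0.5 * (1.0 + erf(v / sqrt(2.0)))
>>> samples = sorted([-1.0, 0.0, 1.0])
>>> n = len(samples)
>>> d = max(max(abs((i + 1) / n - norm_cdf(v)), abs(i / n - norm_cdf(v)))
...         for i, v in enumerate(samples))
>>> print(round(d, 3))
0.175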