Conduct the two-sided Kolmogorov-Smirnov (KS) test for data sampled from a
continuous distribution. By computing the largest absolute difference between the empirical
cumulative distribution function (ECDF) of the sample data and the theoretical cumulative
distribution function (CDF), we obtain a test statistic for the null hypothesis that the sample
data comes from that theoretical distribution.
For more information on the KS test, see the Wikipedia article on the Kolmogorov-Smirnov test.
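As a reference point, the one-sample KS statistic can be computed in a single local pass over a sorted sample. The sketch below (plain Python with illustrative names, not part of the distributed implementation) tests a tiny sample against the uniform CDF on [0, 1]:

```python
# Minimal local sketch of the two-sided one-sample KS statistic.
# The theoretical CDF here is uniform on [0, 1]; any continuous CDF can be swapped in.

def ks_statistic(sample, cdf):
    """Largest absolute distance between the sample ECDF and `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        # The ECDF jumps from (i-1)/n to i/n at x; check both sides of the jump.
        d = max(d, abs(i / n - cdf(x)), abs((i - 1) / n - cdf(x)))
    return d

uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
print(ks_statistic([0.1, 0.4, 0.7], uniform_cdf))  # largest gap is at 0.7: |1 - 0.7| = 0.3
```

Checking both sides of each ECDF jump is what makes the statistic two-sided: the supremum of |ECDF - CDF| can occur just before or just after an observation.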
Implementation note: we seek to implement the KS test with a minimal number of distributed
passes. We sort the RDD and then, on a per-partition basis, compute for each observation an
empirical cumulative distribution value and a theoretical cumulative distribution value. The
latter is exact, while the former is off by a constant: the number of values preceding the
partition in other partitions, divided by the total sample size. However, since this constant
merely shifts the empirical CDF upwards without changing its shape, and since it is the same for
every observation within a given partition, it suffices to keep, for each partition, the two
values that could potentially resolve to the largest global distance: the minimum and the maximum
of the unadjusted differences. Additionally, we keep track of how many elements are in each
partition. Once these three values have been returned for every partition, we can collect them
and operate locally: we adjust each partition's minimum and maximum difference by the appropriate
constant (the cumulative count of elements in the prior partitions divided by the data set size),
and the test statistic is the maximum absolute value among the adjusted differences.