Class LibSVMDataSource

Object
org.apache.spark.ml.source.libsvm.LibSVMDataSource

public class LibSVMDataSource extends Object
The libsvm package implements the Spark SQL data source API for loading LIBSVM data as a DataFrame. The loaded DataFrame has two columns: label, containing labels stored as doubles, and features, containing feature vectors stored as Vectors.

To use the LIBSVM data source, set "libsvm" as the format in DataFrameReader and optionally specify options, for example:


   // Scala
   val df = spark.read.format("libsvm")
     .option("numFeatures", "780")
     .load("data/mllib/sample_libsvm_data.txt")

   // Java
   Dataset<Row> df = spark.read().format("libsvm")
     .option("numFeatures, "780")
     .load("data/mllib/sample_libsvm_data.txt");
 

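The two-column layout described above can be checked directly on the loaded DataFrame. The following is a minimal, illustrative sketch that reuses the df from the Scala example above:

   // Scala (illustrative only)
   df.printSchema()                         // shows a double "label" column and a vector "features" column
   df.select("label", "features").show(5)   // peek at a few rows
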
The LIBSVM data source supports the following options:

  • "numFeatures": number of features. If unspecified or nonpositive, the number of features is determined automatically at the cost of one additional pass over the data. Specifying it explicitly is also useful when the dataset is split across multiple files that you want to load separately, because some features may not be present in every file, which would otherwise lead to inconsistent feature dimensions.
  • "vectorType": feature vector type, "sparse" (default) or "dense".
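
For example, to pin the feature dimension and request dense feature vectors (a sketch only; denseDf is an arbitrary name and the sample path is the one used above):

   // Scala: explicit feature dimension plus dense feature vectors (sketch)
   val denseDf = spark.read.format("libsvm")
     .option("numFeatures", "780")
     .option("vectorType", "dense")
     .load("data/mllib/sample_libsvm_data.txt")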

Note:
This class is public for documentation purposes. Please don't use this class directly; rather, use the data source API as illustrated above.

  • Constructor Details

    • LibSVMDataSource

      public LibSVMDataSource()