SVM

class pai4sk.SupportVectorMachine(max_iter=1000, regularizer=1.0, device_ids=[], verbose=False, use_gpu=False, class_weight=None, num_threads=1, tol=0.001, return_training_history=None, fit_intercept=False, intercept_scaling=1.0)

Support Vector Machine classifier

This class implements a regularized support vector machine classifier using the IBM Snap ML solver. It supports both the local and the distributed (MPI) methods of the Snap ML solver. It can be used for both binary and multi-class classification problems. For multi-class classification it predicts classes or the decision function for each class in the model. It handles both dense and sparse matrix inputs. Use csr_matrix, ndarray, DeviceNDArray or SnapML data partition format for both training and prediction. The DeviceNDArray input data format is currently not supported for training with the MPI implementation. Training uses the dual formulation. We recommend normalizing the input values.

Parameters:
  • max_iter (int, default : 1000) – Maximum number of iterations used by the solver to converge.
  • regularizer (float, default : 1.0) – Regularization strength. It must be a positive float. Larger values specify stronger regularization.
  • use_gpu (bool, default : False) – Flag for indicating the hardware platform used for training. If True, the training is performed using the GPU. If False, the training is performed using the CPU.
  • device_ids (array-like of int, default : []) – If use_gpu is True, it indicates the IDs of the GPUs used for training. For single GPU training, set device_ids to the GPU ID to be used for training, e.g., [0]. For multi-GPU training, set device_ids to a list of GPU IDs to be used for training, e.g., [0, 1].
  • class_weight ('balanced' or None, optional) – If set to 'balanced', the samples are weighted to compensate for class imbalance, giving higher weight to the minority class. If set to None, all classes will have weight 1.
  • verbose (bool, default : False) – If True, it prints the training cost, one per iteration. Warning: this will increase the training time. For performance evaluation, use verbose=False.
  • num_threads (int, default : 1) – The number of threads used for running the training. The value of this parameter should be a multiple of 32 if the training is performed on GPU (use_gpu=True).
  • tol (float, default : 0.001) – The tolerance parameter. Training will finish when maximum change in model coefficients is less than tol.
  • return_training_history (str or None, default : None) – How much information about the training should be collected and returned by the fit function. By default no information is returned (None), but this parameter can be set to “summary”, to obtain summary statistics at the end of training, or “full” to obtain a complete set of statistics for the entire training procedure. Note that enabling either option will result in slower training. return_training_history is not supported for the DeviceNDArray input format.
  • fit_intercept (bool, default : False) – Add a bias term to the model. Note that this may affect the speed of convergence, especially for sparse datasets.
  • intercept_scaling (float, default : 1.0) – Scaling of the bias term. The inclusion of a bias term is implemented by appending an additional feature to the dataset. This feature has a constant value that can be set using this parameter.
Variables:
  • coef (array-like, shape (n_features,) for binary classification or (n_features, n_classes) for multi-class classification) – Coefficients of the features in the trained model.
  • support (array-like, shape (n_SV)) – Indices of the support vectors. Currently not supported for the MPI implementation.
  • n_support (int) – Number of support vectors. Currently not supported for the MPI implementation.
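
A minimal end-to-end sketch, assuming a local (non-MPI) pai4sk installation and CPU training; the synthetic scikit-learn data and the hyper-parameter values shown are illustrative only:

    import numpy as np
    from sklearn.datasets import make_classification
    from pai4sk import SupportVectorMachine

    # Dense binary-classification data; normalizing the inputs is recommended above.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    clf = SupportVectorMachine(max_iter=1000,
                               regularizer=1.0,
                               use_gpu=False,   # set True and pass device_ids=[0] for GPU training
                               class_weight=None,
                               tol=0.001,
                               fit_intercept=False)
    clf.fit(X, y)

    # predict returns array-like of shape (n_samples,), as documented below
    pred = clf.predict(X)
    print("training accuracy:", (pred == y).mean())
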
decision_function(X, num_threads=0)

Predicts confidence scores.

The confidence score of a sample is the signed distance of that sample to the decision boundary.

Parameters:
  • X – Dataset used for predicting distances to the decision boundary. Supports the following input data types:
    1. Sparse matrix (csr_matrix, csc_matrix) or dense matrix (ndarray)
    2. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • num_threads (int, default : 0) – Number of threads used to run inference. By default inference runs with the maximum number of available threads.
Returns:

proba – Returns the distance to the decision boundary of the samples in X.

Return type:

array-like, shape = (n_samples,) or (n_samples, n_classes)
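
The following sketch obtains confidence scores for a binary problem. It assumes a local pai4sk installation; the scikit-learn data and the thresholding at zero are purely illustrative:

    import numpy as np
    from sklearn.datasets import make_classification
    from pai4sk import SupportVectorMachine

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    clf = SupportVectorMachine(max_iter=100)
    clf.fit(X, y)

    scores = clf.decision_function(X)      # shape (n_samples,) for a binary problem
    sides = np.where(scores >= 0, 1, 0)    # sign of the score gives the side of the boundary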

fit(X_train, y_train=None)

Fit the model according to the given training dataset.

Parameters:
  • X_train – Training dataset. Supports the following input data types:
    1. Sparse matrix (csr_matrix, csc_matrix) or dense matrix (ndarray)
    2. DeviceNDArray. Not supported for MPI execution.
    3. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • y_train – The target corresponding to X_train. If X_train is a sparse or dense matrix, y_train should be array-like of shape = (n_samples,). For DeviceNDArray input, y_train should be array-like of shape = (n_samples, 1). For binary classification the labels should be {-1, 1} or {0, 1}. If X_train is a SnapML data partition type, then y_train is not required (i.e., None).
Returns:

self

Return type:

object
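
As a sketch of the sparse-input path, the snippet below fits on a random csr_matrix with labels in {-1, 1}; the data is meaningless and serves only to illustrate the expected shapes and label convention:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from pai4sk import SupportVectorMachine

    n_samples, n_features = 500, 100
    X_train = sparse_random(n_samples, n_features, density=0.05,
                            format="csr", random_state=0)
    y_train = np.random.RandomState(0).choice([-1, 1], size=n_samples)  # shape (n_samples,)

    clf = SupportVectorMachine(fit_intercept=True, intercept_scaling=1.0)
    clf.fit(X_train, y_train)   # returns self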

get_params()

Get the values of the model parameters.

Returns:

params

Return type:

dict
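
A small sketch of inspecting a configured model; whether the returned dictionary is keyed by the constructor argument names is an assumption here:

    from pai4sk import SupportVectorMachine

    clf = SupportVectorMachine(regularizer=10.0, use_gpu=False)
    params = clf.get_params()   # dict of model parameters
    print(params)               # keys are expected to mirror the constructor arguments
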
predict(X, num_threads=0)

Class predictions

Predicts the class estimates for the samples in X.

Parameters:
  • X – Dataset used for predicting class estimates. Supports the following input data types:
    1. Sparse matrix (csr_matrix) or dense matrix (ndarray)
    2. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • num_threads (int, default : 0) – Number of threads used to run inference. By default inference runs with the maximum number of available threads.
Returns:

proba – Returns the predicted class of the samples in X.

Return type:

array-like, shape = (n_samples,)
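
A short sketch of inference with an explicit thread count; the synthetic scikit-learn data and the train/test split are illustrative only:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from pai4sk import SupportVectorMachine

    X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    clf = SupportVectorMachine()
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test, num_threads=4)   # array-like, shape (n_samples,)
    print("test accuracy:", (pred == y_test).mean())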