LogisticRegression

class pai4sk.LogisticRegression(max_iter=1000, regularizer=1.0, device_ids=[], verbose=False, use_gpu=False, class_weight=None, dual=True, num_threads=1, penalty='l2', tol=0.001, return_training_history=None, privacy=False, eta=0.3, batch_size=100, privacy_epsilon=10, grad_clip=1, fit_intercept=False, intercept_scaling=1.0)

Logistic Regression classifier

This class implements regularized logistic regression using the IBM Snap ML solver. It supports both the local and the distributed (MPI) versions of the Snap ML solver and can be used for binary and multi-class classification problems. For multi-class classification it predicts only classes (no probabilities). It handles both dense and sparse matrix inputs: use csr, csc, ndarray, DeviceNDArray or SnapML data partition format for training, and csr, ndarray or SnapML data partition format for prediction. The DeviceNDArray input format is currently not supported for training with the MPI implementation. We recommend normalizing the input values first.
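
A minimal usage sketch, assuming LogisticRegression can be imported from the pai4sk package top level (as the class path above suggests); the synthetic data and variable names are purely illustrative.

    import numpy as np
    from pai4sk import LogisticRegression

    # Small synthetic binary problem with features in [0, 1] and labels in {0, 1}.
    rng = np.random.RandomState(0)
    X = rng.rand(200, 10).astype(np.float32)
    y = (X[:, 0] > 0.5).astype(np.float32)

    clf = LogisticRegression(max_iter=1000, regularizer=1.0, use_gpu=False)
    clf.fit(X, y)
    pred = clf.predict(X)   # predicted class labels, shape (200,)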

Parameters:
  • max_iter (int, default : 1000) – Maximum number of iterations used by the solver to converge.
  • regularizer (float, default : 1.0) – Regularization strength. It must be a positive float. Larger regularization values imply stronger regularization.
  • use_gpu (bool, default : False) – Flag for indicating the hardware platform used for training. If True, the training is performed using the GPU. If False, the training is performed using the CPU.
  • device_ids (array-like of int, default : []) – If use_gpu is True, it indicates the IDs of the GPUs used for training. For single-GPU training, set device_ids to the GPU ID to be used for training, e.g., [0]. For multi-GPU training, set device_ids to a list of GPU IDs to be used for training, e.g., [0, 1].
  • class_weight ('balanced' or None, optional) – If set to None, all classes will have weight 1. If set to 'balanced', class weights are adjusted inversely proportional to the class frequencies in the training data.
  • dual (bool, default : True) – Dual or primal formulation. Recommendation: if n_samples > n_features use dual=True.
  • verbose (bool, default : False) – If True, it prints the training cost, one per iteration. Warning: this will increase the training time. For performance evaluation, use verbose=False.
  • num_threads (int, default : 1) – The number of threads used for running the training. The value of this parameter should be a multiple of 32 if the training is performed on GPU (use_gpu=True).
  • penalty (str, default : "l2") – The regularization / penalty type. Possible values are “l2” for L2 regularization (LogisticRegression) or “l1” for L1 regularization (SparseLogisticRegression). L1 regularization is possible only for the primal optimization problem (dual=False).
  • tol (float, default : 0.001) – The tolerance parameter. Training will finish when maximum change in model coefficients is less than tol.
  • return_training_history (str or None, default : None) – How much information about the training should be collected and returned by the fit function. By default no information is returned (None), but this parameter can be set to “summary”, to obtain summary statistics at the end of training, or “full” to obtain a complete set of statistics for the entire training procedure. Note, enabling either option will result in slower training. return_training_history is not supported for DeviceNDArray input format.
  • privacy (bool, default : False) – Train the model using a differentially private algorithm. Currently not supported for MPI implementation. See the configuration sketch after this parameter list.
  • eta (float, default : 0.3) – Learning rate for the differentially private training algorithm. Currently not supported for MPI implementation.
  • batch_size (int, default : 100) – Mini-batch size for the differentially private training algorithm. Currently not supported for MPI implementation.
  • privacy_epsilon (float, default : 10.0) – Target privacy guarantee. The learned model will be (privacy_epsilon, 0.01)-private. Currently not supported for MPI implementation.
  • grad_clip (float, default: 1.0) – Gradient clipping parameter for the differentially private training algorithm. Currently not supported for MPI implementation.
  • fit_intercept (bool, default : False) – Add a bias term to the model. Note that this may affect the speed of convergence, especially for sparse datasets.
  • intercept_scaling (float, default : 1.0) – Scaling of the bias term. The inclusion of a bias term is implemented by appending an additional feature to the dataset. This feature has a constant value that can be set using this parameter.
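
A hedged configuration sketch for the optional hardware and differential-privacy settings described above; the parameter names follow the signature at the top of this page, and the values shown are only examples.

    from pai4sk import LogisticRegression

    # Single-GPU training: num_threads should be a multiple of 32 when use_gpu=True.
    gpu_clf = LogisticRegression(use_gpu=True, device_ids=[0], num_threads=32)

    # Differentially private training (local implementation only, not MPI).
    # The learned model is intended to be (privacy_epsilon, 0.01)-private.
    dp_clf = LogisticRegression(privacy=True, eta=0.3, batch_size=100,
                                privacy_epsilon=10, grad_clip=1)
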
Variables:
  • coef (array-like, shape (n_features, 1) for binary classification, (n_features, n_classes) for multi-class classification) – Coefficients of the features in the trained model.
  • support (array-like) – Indices of the features that contribute to the decision (only available for L1). Currently not supported for MPI implementation.
  • model_sparsity (float) – Fraction of non-zeros in the model parameters (only available for L1). Currently not supported for MPI implementation. See the inspection sketch after this list.
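
A brief sketch of fitting an L1-regularized model and inspecting the sparsity-related attributes. The attribute names follow the Variables list above; some releases may expose them with a trailing underscore (e.g. coef_), so getattr is used defensively here.

    import numpy as np
    from pai4sk import LogisticRegression

    rng = np.random.RandomState(1)
    X = rng.rand(500, 50).astype(np.float32)
    y = (X[:, 0] + X[:, 1] > 1.0).astype(np.float32)

    # L1 regularization requires the primal formulation (dual=False).
    clf = LogisticRegression(penalty='l1', dual=False, regularizer=10.0)
    clf.fit(X, y)

    print(getattr(clf, 'support', None))          # indices of contributing features
    print(getattr(clf, 'model_sparsity', None))   # fraction of non-zeros in the model
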
fit(X_train, y_train=None)

Fit the model according to the given train data.

Parameters:
  • X_train (Train dataset) – Supports the following input data-types:
    1. Sparse matrix (csr_matrix, csc_matrix) or dense matrix (ndarray)
    2. DeviceNDArray. Not supported for MPI execution.
    3. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • y_train (The target corresponding to X_train) – If X_train is a sparse or dense matrix, y_train should be array-like of shape (n_samples,). If X_train is a DeviceNDArray, y_train should be array-like of shape (n_samples, 1). For binary classification the labels should be {-1, 1} or {0, 1}. If X_train is a SnapML data partition type, then y_train is not required (i.e. None).
Returns: self
Return type: object
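
A hedged example of fit() with a sparse CSR training matrix and {-1, 1} labels, per the supported input types listed above; the data below is synthetic.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from pai4sk import LogisticRegression

    # 1000 samples, 100 features, ~5% non-zero entries.
    X_train = sparse_random(1000, 100, density=0.05, format='csr',
                            dtype=np.float32, random_state=0)
    y_train = np.where(np.asarray(X_train.sum(axis=1)).ravel() > 0.2, 1.0, -1.0)

    clf = LogisticRegression(dual=True)
    clf.fit(X_train, y_train)   # returns self, so calls can be chained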

get_params()

Get the values of the model parameters.

Returns: params
Return type: dict
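
A small sketch of get_params(), assuming the returned dict is keyed by the constructor parameter names.

    from pai4sk import LogisticRegression

    clf = LogisticRegression(regularizer=2.0, penalty='l2')
    params = clf.get_params()          # dict of model parameters
    print(params.get('regularizer'))   # expected: 2.0
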
predict(X, num_threads=0)

Class predictions

Returns the predicted class estimates for the samples in X.

Parameters:
  • X (Dataset used for predicting class estimates) – Supports the following input data-types:
    1. Sparse matrix (csr_matrix, csc_matrix) or dense matrix (ndarray)
    2. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • num_threads (int, default : 0) – Number of threads used to run inference. By default, inference runs with the maximum number of available threads.
Returns: proba – The predicted class for each sample.
Return type: array-like, shape = (n_samples,)
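
A hedged inference sketch showing the default and an explicit thread count; the data and names are illustrative.

    import numpy as np
    from pai4sk import LogisticRegression

    rng = np.random.RandomState(0)
    X = rng.rand(100, 5).astype(np.float32)
    y = (X[:, 0] > 0.5).astype(np.float32)

    clf = LogisticRegression().fit(X, y)          # fit returns self
    pred_default = clf.predict(X)                 # all available threads (num_threads=0)
    pred_limited = clf.predict(X, num_threads=4)  # cap inference at 4 threads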

predict_log_proba(X, num_threads=0)

Log of probability estimates

Returns the log-probability estimates for the two classes. Only supported for binary classification.

Parameters:
  • X (Dataset used for predicting log-probability estimates) – Supports the following input data-types:
    1. Sparse matrix (csr_matrix, csc_matrix) or dense matrix (ndarray)
    2. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • num_threads (int, default : 0) – Number of threads used to run inference. By default, inference runs with the maximum number of available threads.
Returns: proba – The log-probability estimates.
Return type: array-like of shape = (n_samples, 2) for the local implementation, giving the log-probability of each of the two classes for every sample; array-like of shape = (n_samples,) for the MPI implementation, giving the log-probability that each sample is a positive example.
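
A short hedged sketch of predict_log_proba for a binary problem with the local implementation, where the documented return shape is (n_samples, 2).

    import numpy as np
    from pai4sk import LogisticRegression

    rng = np.random.RandomState(0)
    X = rng.rand(100, 5).astype(np.float32)
    y = (X[:, 0] > 0.5).astype(np.float32)

    clf = LogisticRegression().fit(X, y)
    log_proba = clf.predict_log_proba(X)   # shape (100, 2) locally, one column per class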

predict_proba(X, num_threads=0)

Probability estimates

Returns the probability estimates for the two classes. Only supported for binary classification.

Parameters:
  • X (Dataset used for predicting probability estimates) – Supports the following input data-types:
    1. Sparse matrix (csr_matrix, csc_matrix) or dense matrix (ndarray)
    2. SnapML data partition of type DensePartition, SparsePartition or ConstantValueSparsePartition
  • num_threads (int, default : 0) – Number of threads used to run inference. By default, inference runs with the maximum number of available threads.
Returns: proba – The probability estimates.
Return type: array-like of shape = (n_samples, 2) for the local implementation, giving the probability of each of the two classes for every sample; array-like of shape = (n_samples,) for the MPI implementation, giving the probability that each sample is a positive example.
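
A hedged sketch of predict_proba for a binary problem with the local implementation; per the return description above, each row of the (n_samples, 2) result holds the per-class probabilities and should sum to roughly 1.

    import numpy as np
    from pai4sk import LogisticRegression

    rng = np.random.RandomState(0)
    X = rng.rand(100, 5).astype(np.float32)
    y = (X[:, 0] > 0.5).astype(np.float32)

    clf = LogisticRegression().fit(X, y)
    proba = clf.predict_proba(X)   # shape (100, 2) locally
    print(proba.sum(axis=1))       # each row sums to ~1.0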