linear_model.Lasso (uses SnapML)

class pai4sk.linear_model.Lasso(alpha=1.0, fit_intercept=True, normalize=False, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic', verbose=0, use_gpu=True, device_ids=[], return_training_history=None, privacy=False, eta=0.3, batch_size=100, privacy_epsilon=10, grad_clip=1, num_threads=1)

Linear Model trained with L1 prior as regularizer (aka the Lasso)

The optimization objective for Lasso is:

(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

Technically the Lasso model is optimizing the same objective function as the Elastic Net with l1_ratio=1.0 (no L2 penalty).
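
As an illustration of this equivalence, here is a minimal sketch that fits both models on the same toy data. It assumes pai4sk also exposes an ElasticNet estimator with the scikit-learn interface; the parameter values are purely illustrative.

>>> from pai4sk import linear_model
>>> X = [[0, 0], [1, 1], [2, 2]]
>>> y = [0, 1, 2]
>>> lasso = linear_model.Lasso(alpha=0.1).fit(X, y)
>>> enet = linear_model.ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X, y)
>>> # Both models optimize the same objective, so their coefficient vectors
>>> # should agree up to numerical tolerance.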

Read more in the User Guide.

For the SnapML solver, both local and distributed (MPI) modes of execution are supported.

Parameters
  • alpha (float, optional) – Constant that multiplies the L1 term. Defaults to 1.0. alpha = 0 is equivalent to ordinary least squares, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised; use the LinearRegression object instead.

  • fit_intercept (boolean, optional, default True) – Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered).

  • normalize (boolean, optional, default False) – This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use pai4sk.preprocessing.StandardScaler before calling fit on an estimator with normalize=False.

  • precompute (True | False | array-like, default=False) – Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the choice is made automatically. The Gram matrix can also be passed as an argument. For sparse input this option is always True to preserve sparsity.

  • copy_X (boolean, optional, default True) – If True, X will be copied; else, it may be overwritten.

  • max_iter (int, optional) – The maximum number of iterations.

  • tol (float, optional) – The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.

  • warm_start (bool, optional) – When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.

  • positive (bool, optional) – When set to True, forces the coefficients to be positive.

  • random_state (int, RandomState instance or None, optional, default None) – The seed of the pseudo random number generator that selects a random feature to update. If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when selection == ‘random’.

  • selection (str, default 'cyclic') – If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. Setting this to ‘random’ often leads to significantly faster convergence, especially when tol is higher than 1e-4.

  • verbose (bool, default : False) – If True, the training cost is printed once per iteration. Warning: this will increase the training time. For performance evaluation, use verbose=False.

  • use_gpu (bool, default : True) – Flag indicating the hardware platform used for training. If True, training is performed on the GPU. If False, training is performed on the CPU. Unless set explicitly, the value of this parameter may be changed based on the training data. Applicable only for the snapml solver.

  • device_ids (array-like of int, default : []) – If use_gpu is True, it indicates the IDs of the GPUs used for training. For single-GPU training, set device_ids to the GPU ID to be used for training, e.g., [0]. For multi-GPU training, set device_ids to a list of GPU IDs to be used for training, e.g., [0, 1]. Applicable only for the snapml solver.

  • num_threads (int, default : 1) – The number of threads used for training. If training is performed on the GPU (use_gpu=True), the value should be a multiple of 32 (the default value for GPU is 256). Applicable only for the snapml solver.

  • return_training_history (str or None, default : None) – How much information about the training should be collected and returned by the fit function. By default no information is returned (None). Set to “summary” to obtain summary statistics at the end of training, or to “full” to obtain a complete set of statistics for the entire training procedure. Note that enabling either option will result in slower training. Applicable only for the snapml solver; see the sketch after this parameter list.

  • privacy (bool, default : False) – Train the model using a differentially private algorithm.

  • eta (float, default : 0.3) – Learning rate for the differentially private training algorithm.

  • batch_size (int, default : 100) – Mini-batch size for the differentially private training algorithm.

  • privacy_epsilon (float, default : 10.0) – Target privacy guarantee. The learned model will be (privacy_epsilon, 0.01)-private.

  • grad_clip (float, default: 1.0) – Gradient clipping parameter for the differentially private training algorithm.
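
The sketch below illustrates how the SnapML-specific options described above fit together. It is an illustration only: it assumes a machine with at least one GPU, and the parameter values are not recommendations.

>>> from pai4sk import linear_model
>>> clf = linear_model.Lasso(alpha=0.1,
...                          use_gpu=True,                       # train on the GPU
...                          device_ids=[0],                     # single-GPU training on device 0
...                          num_threads=256,                    # multiple of 32, the GPU default
...                          return_training_history='summary')  # collect summary statistics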

Variables
  • coef_ (array, shape (n_features,) | (n_targets, n_features)) – Parameter vector (w in the cost function formula).

  • sparse_coef_ (scipy.sparse matrix, shape (n_features, 1) | (n_targets, n_features)) – Read-only property derived from coef_.

  • intercept_ (float | array, shape (n_targets,)) – Independent term in the decision function.

  • n_iter_ (int | array-like, shape (n_targets,)) – Number of iterations run by the coordinate descent solver to reach the specified tolerance.

  • training_history_ (dict) – Dictionary with the following keys: ‘epochs’, ‘t_elap_sec’, ‘train_obj’. If return_training_history is set to “summary”, ‘epochs’ contains the total number of epochs performed and ‘t_elap_sec’ contains the total time for completing all of those epochs. If return_training_history is set to “full”, ‘epochs’ indicates the number of epochs that have elapsed so far, and ‘t_elap_sec’ contains the time to do those epochs. ‘train_obj’ is the training loss. Applicable only for the snapml solver; see the sketch after this list.

  • support_ (array-like) – Indices of the features that lie in the support and contribute to the decision.

  • model_sparsity_ (float) – Fraction of non-zeros in the model parameters.
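
A minimal sketch of inspecting these attributes after training. It assumes clf was fit with the snapml solver and return_training_history='summary'; the key names follow the description above.

>>> hist = clf.training_history_
>>> hist['epochs']       # total number of epochs performed
>>> hist['t_elap_sec']   # total time for completing those epochs
>>> hist['train_obj']    # training loss
>>> clf.support_         # indices of the features in the support
>>> clf.model_sparsity_  # fraction of non-zeros in the model parameters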

Examples

>>> from pai4sk import linear_model
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
Lasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000,
   normalize=False, positive=False, precompute=False, random_state=None,
   selection='cyclic', tol=0.0001, warm_start=False)
>>> print(clf.coef_)
[0.85 0.  ]
>>> print(clf.intercept_)  
0.15...

See also

lars_path, lasso_path, LassoLars, LassoCV, LassoLarsCV, pai4sk.decomposition.sparse_encode

Notes

The algorithm used to fit the model is coordinate descent.

To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
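
A minimal sketch of passing X in Fortran-contiguous layout; the data here is random and purely illustrative.

>>> import numpy as np
>>> from pai4sk import linear_model
>>> X = np.asfortranarray(np.random.rand(100, 10))  # column-major (Fortran) layout
>>> y = np.random.rand(100)
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit(X, y)  # X is already in the layout the coordinate descent solver works with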

fit(X, y, check_input=True)

Fit model with coordinate descent.

Parameters
  • X (ndarray or scipy.sparse matrix, (n_samples, n_features)) – Data. For the SnapML solver, input of types SnapML data partition and DeviceNDArray is also supported; see the sketch after this parameter list.

  • y (ndarray, shape (n_samples,) or (n_samples, n_targets)) – Target. Will be cast to X’s dtype if necessary.

  • check_input (boolean, (default=True)) – Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing.
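
A minimal sketch of fitting on sparse input. It assumes a SciPy CSR matrix as training data; SnapML data partition and DeviceNDArray inputs are not shown.

>>> import numpy as np
>>> import scipy.sparse as sp
>>> from pai4sk import linear_model
>>> X = sp.random(100, 20, density=0.1, format='csr', random_state=0)  # sparse training data
>>> y = np.random.rand(100)
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit(X, y)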

Notes

Coordinate descent is an algorithm that considers each column of the data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary.

To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.

predict(X, num_threads=0)

Class predictions.

Parameters
  • X (sparse matrix (csr_matrix) or dense matrix (ndarray)) – Dataset used for predicting class estimates. For the SnapML solver it also supports input of type SnapML data partition.

  • num_threads (int, default : 0) – Number of threads used to run inference. By default inference runs with the maximum number of available threads.

Returns
  • proba (array-like, shape = (n_samples,)) – The returned class estimates; the predicted class of the sample.
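
A minimal usage sketch for predict. Here clf is assumed to be an already fitted estimator and X_test a dense ndarray or CSR matrix with the same number of features as the training data; both names are hypothetical.

>>> preds = clf.predict(X_test)                        # uses the maximum number of available threads
>>> preds_single = clf.predict(X_test, num_threads=1)  # restrict inference to one thread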