Lasso
Linear Model trained with L1 prior as regularizer (aka the Lasso).
The optimization objective for Lasso is:

(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Python Reference
Constructors
constructor()
Signature
new Lasso(opts?: object): Lasso;
Parameters
Name | Type | Description |
---|---|---|
opts? | object | - |
opts.alpha? | number | Constant that multiplies the L1 term, controlling regularization strength. alpha must be a non-negative float, i.e. in [0, inf). When alpha = 0, the objective is equivalent to ordinary least squares, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Instead, you should use the LinearRegression object. Default Value 1 |
opts.copy_X? | boolean | If true , X will be copied; else, it may be overwritten. Default Value true |
opts.fit_intercept? | boolean | Whether to calculate the intercept for this model. If set to false , no intercept will be used in calculations (i.e. data is expected to be centered). Default Value true |
opts.max_iter? | number | The maximum number of iterations. Default Value 1000 |
opts.positive? | boolean | When set to true , forces the coefficients to be positive. Default Value false |
opts.precompute? | boolean | ArrayLike [] | Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always false to preserve sparsity. Default Value false |
opts.random_state? | number | The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. |
opts.selection? | "random" | "cyclic" | If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Default Value 'cyclic' |
opts.tol? | number | The tolerance for the optimization: if the updates are smaller than tol , the optimization code checks the dual gap for optimality and continues until it is smaller than tol , see Notes below. Default Value 0.0001 |
opts.warm_start? | boolean | When set to true , reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Default Value false |
Returns
Lasso
Defined in: generated/linear_model/Lasso.ts:23
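A minimal construction sketch, assuming the package exports Lasso from its root module (the import path below is illustrative and not confirmed by this page). The options mirror the table above:

```ts
// Illustrative import path; adjust to your package's actual entry point.
import { Lasso } from 'sklearn';

// All options are optional and mirror the constructor table above.
const lasso = new Lasso({
  alpha: 0.1,          // L1 regularization strength (default 1)
  fit_intercept: true, // learn an intercept term (default true)
  max_iter: 10000,     // cap on coordinate descent iterations (default 1000)
  selection: 'random', // update a random coefficient each iteration
  random_state: 0,     // reproducible 'random' selection
});
```

The instance is not usable until init() has been called; see init() under Methods.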
Properties
_isDisposed
boolean = false
Defined in: generated/linear_model/Lasso.ts:21
_isInitialized
boolean = false
Defined in: generated/linear_model/Lasso.ts:20
_py
PythonBridge
Defined in: generated/linear_model/Lasso.ts:19
id
string
Defined in: generated/linear_model/Lasso.ts:16
opts
any
Defined in: generated/linear_model/Lasso.ts:17
Accessors
coef_
Parameter vector (w in the cost function formula).
Signature
coef_(): Promise<ArrayLike>;
Returns
Promise<ArrayLike>
Defined in: generated/linear_model/Lasso.ts:458
dual_gap_
Given the parameter alpha, the dual gap at the end of the optimization, with the same shape as each observation of y.
Signature
dual_gap_(): Promise<number | ArrayLike>;
Returns
Promise<number | ArrayLike>
Defined in: generated/linear_model/Lasso.ts:480
feature_names_in_
Names of features seen during fit. Defined only when X has feature names that are all strings.
Signature
feature_names_in_(): Promise<ArrayLike>;
Returns
Promise<ArrayLike>
Defined in: generated/linear_model/Lasso.ts:571
intercept_
Independent term in decision function.
Signature
intercept_(): Promise<number | ArrayLike>;
Returns
Promise<number | ArrayLike>
Defined in: generated/linear_model/Lasso.ts:503
n_features_in_
Number of features seen during fit.
Signature
n_features_in_(): Promise<number>;
Returns
Promise<number>
Defined in: generated/linear_model/Lasso.ts:548
n_iter_
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
Signature
n_iter_(): Promise<number>;
Returns
Promise<number>
Defined in: generated/linear_model/Lasso.ts:526
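A short sketch of reading the fitted attributes above. It assumes a Lasso instance that has already been initialized with init() and fitted with fit() (see Methods below), and that each accessor is an async getter returning a Promise as the signatures indicate:

```ts
// Assumes `lasso` was initialized via init(py) and fitted via fit({ X, y }).
const coef = await lasso.coef_;           // parameter vector w
const intercept = await lasso.intercept_; // independent term
const nIter = await lasso.n_iter_;        // coordinate descent iterations run
const dualGap = await lasso.dual_gap_;    // dual gap at the end of optimization
console.log({ coef, intercept, nIter, dualGap });
```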
py
Signature
py(): PythonBridge;
Returns
PythonBridge
Defined in: generated/linear_model/Lasso.ts:98
Signature
py(pythonBridge: PythonBridge): void;
Parameters
Name | Type |
---|---|
pythonBridge | PythonBridge |
Returns
void
Defined in: generated/linear_model/Lasso.ts:102
Methods
dispose()
Disposes of the underlying Python resources.
Once dispose() is called, the instance is no longer usable.
Signature
dispose(): Promise<void>;
Returns
Promise<void>
Defined in: generated/linear_model/Lasso.ts:160
fit()
Fit model with coordinate descent.
Signature
fit(opts: object): Promise<any>;
Parameters
Name | Type | Description |
---|---|---|
opts | object | - |
opts.X? | any | Data. |
opts.check_input? | boolean | Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing. Default Value true |
opts.sample_weight? | number | ArrayLike | Sample weights. Internally, the sample_weight vector will be rescaled to sum to n_samples. |
opts.y? | any | Target. Will be cast to X’s dtype if necessary. |
Returns
Promise<any>
Defined in: generated/linear_model/Lasso.ts:177
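A hedged fit sketch on a tiny dense dataset; it assumes the instance has already been constructed and init() has resolved (see init() below):

```ts
// Assumes `lasso` is an initialized Lasso instance.
const X = [
  [0, 0],
  [1, 1],
  [2, 2],
];
const y = [0, 1, 2];

await lasso.fit({ X, y });
```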
init()
Initializes the underlying Python resources.
This instance is not usable until the Promise returned by init() resolves.
Signature
init(py: PythonBridge): Promise<void>;
Parameters
Name | Type |
---|---|
py | PythonBridge |
Returns
Promise<void>
Defined in: generated/linear_model/Lasso.ts:111
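A hedged end-to-end lifecycle sketch. It assumes the package exports Lasso and the PythonBridge type from its root module (the import path is illustrative) and that a bridge instance py has been created elsewhere; how to obtain one is outside the scope of this page:

```ts
import { Lasso, PythonBridge } from 'sklearn'; // illustrative import path

async function run(py: PythonBridge): Promise<void> {
  const lasso = new Lasso({ alpha: 0.5 });

  // The instance is not usable until init() resolves.
  await lasso.init(py);

  await lasso.fit({ X: [[0, 0], [1, 1], [2, 2]], y: [0, 1, 2] });
  console.log(await lasso.predict({ X: [[1.5, 1.5]] }));

  // Release the underlying Python resources when finished.
  await lasso.dispose();
}
```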
path()
Compute elastic net path with coordinate descent.
The elastic net optimization function varies for mono and multi-outputs.
For mono-output tasks it is:

1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
Signature
path(opts: object): Promise<ArrayLike>;
Parameters
Name | Type | Description |
---|---|---|
opts | object | - |
opts.X? | ArrayLike | Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. |
opts.Xy? | ArrayLike | Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. |
opts.alphas? | ArrayLike | List of alphas where to compute the models. If undefined, alphas are set automatically. |
opts.check_input? | boolean | If set to false , the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. Default Value true |
opts.coef_init? | ArrayLike | The initial values of the coefficients. |
opts.copy_X? | boolean | If true , X will be copied; else, it may be overwritten. Default Value true |
opts.eps? | number | Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. Default Value 0.001 |
opts.l1_ratio? | number | Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso. Default Value 0.5 |
opts.n_alphas? | number | Number of alphas along the regularization path. Default Value 100 |
opts.params? | any | Keyword arguments passed to the coordinate descent solver. |
opts.positive? | boolean | If set to true, forces coefficients to be positive. (Only allowed when y.ndim == 1). Default Value false |
opts.precompute? | boolean | ArrayLike [] | "auto" | Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument. Default Value 'auto' |
opts.return_n_iter? | boolean | Whether to return the number of iterations or not. Default Value false |
opts.verbose? | number | boolean | Amount of verbosity. Default Value false |
opts.y? | any | Target values. |
Returns
Promise<ArrayLike>
Defined in: generated/linear_model/Lasso.ts:237
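A sketch of computing a regularization path on an initialized instance; the parameter values are illustrative:

```ts
// Assumes `lasso` is an initialized Lasso instance.
const path = await lasso.path({
  X: [[0, 0], [1, 1], [2, 2]],
  y: [0, 1, 2],
  n_alphas: 50, // number of alphas along the path
  eps: 1e-3,    // alpha_min / alpha_max
  l1_ratio: 1,  // 1 restricts the elastic net path to the pure Lasso case
});
```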
predict()
Predict using the linear model.
Signature
predict(opts: object): Promise<any>;
Parameters
Name | Type | Description |
---|---|---|
opts | object | - |
opts.X? | any | Samples. |
Returns
Promise<any>
Defined in: generated/linear_model/Lasso.ts:378
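A short prediction sketch, assuming a fitted instance:

```ts
// Assumes `lasso` has been initialized and fitted.
const yHat = await lasso.predict({ X: [[1.5, 1.5], [3, 3]] });
console.log(yHat); // predicted targets for the two samples
```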
score()
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
Signature
score(opts: object): Promise<number>;
Parameters
Name | Type | Description |
---|---|---|
opts | object | - |
opts.X? | ArrayLike [] | Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. |
opts.sample_weight? | ArrayLike | Sample weights. |
opts.y? | ArrayLike | True values for X . |
Returns
Promise<number>
Defined in: generated/linear_model/Lasso.ts:411
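A scoring sketch on held-out data, assuming a fitted instance:

```ts
// Assumes `lasso` has been initialized and fitted.
const r2 = await lasso.score({
  X: [[0, 0], [1, 1], [2, 2]],
  y: [0, 1, 2],
  // sample_weight: [1, 1, 2], // optional per-sample weights
});
console.log(r2); // 1.0 for a perfect fit; can be negative for poor fits
```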