
HuberRegressor

L2-regularized linear regression model that is robust to outliers.

The Huber Regressor optimizes the squared loss for the samples where |(y - Xw - c) / sigma| < epsilon and the absolute loss for the samples where |(y - Xw - c) / sigma| > epsilon, where the model coefficients w, the intercept c and the scale sigma are parameters to be optimized. The parameter sigma ensures that if y is scaled up or down by a certain factor, epsilon does not need to be rescaled to achieve the same robustness. Note that this does not account for the fact that the different features of X may be on different scales.

The Huber loss function has the advantage of not being heavily influenced by the outliers while not completely ignoring their effect.
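
To make the two regimes concrete, here is a minimal sketch of the classic Huber loss on a single residual in plain TypeScript. This is an illustration only, not part of this package's API, and the jointly optimized scale sigma is fixed at 1 here:

```typescript
// Classic Huber loss on a residual r: quadratic for small residuals,
// linear beyond epsilon, so large outliers are down-weighted rather
// than squared. (The estimator itself also optimizes a scale sigma.)
function huberLoss(r: number, epsilon = 1.35): number {
  const a = Math.abs(r);
  return a <= epsilon
    ? 0.5 * a * a // squared-loss regime
    : epsilon * (a - 0.5 * epsilon); // absolute-loss regime

}

console.log(huberLoss(0.5)); // 0.125: quadratic regime
console.log(huberLoss(10)); // grows only linearly with the residual
```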

Read more in the scikit-learn User Guide.

Python Reference

Constructors

constructor()

Signature

new HuberRegressor(opts?: object): HuberRegressor;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts?` | `object` | - |
| `opts.alpha?` | `number` | Strength of the squared L2 regularization; the penalty equals `alpha` times the squared L2 norm of `w`. Must be in the range `[0, inf)`. Default: `0.0001`. |
| `opts.epsilon?` | `number` | Controls the number of samples that should be classified as outliers; the smaller the epsilon, the more robust the model is to outliers. Must be in the range `[1, inf)`. Default: `1.35`. |
| `opts.fit_intercept?` | `boolean` | Whether or not to fit the intercept. Can be set to `false` if the data is already centered around the origin. Default: `true`. |
| `opts.max_iter?` | `number` | Maximum number of iterations that `scipy.optimize.minimize(method="L-BFGS-B")` should run for. Default: `100`. |
| `opts.tol?` | `number` | Iteration stops when `max{\|proj g_i\| : i = 1, ..., n} <= tol`, where `proj g_i` is the i-th component of the projected gradient. Default: `0.00001`. |
| `opts.warm_start?` | `boolean` | Useful when the stored attributes of a previously fitted model have to be reused. If `false`, the coefficients are overwritten on every call to `fit`. See the Glossary. Default: `false`. |

Returns

HuberRegressor

Defined in: generated/linear_model/HuberRegressor.ts:27

Properties

_isDisposed

boolean = false

Defined in: generated/linear_model/HuberRegressor.ts:25

_isInitialized

boolean = false

Defined in: generated/linear_model/HuberRegressor.ts:24

_py

PythonBridge

Defined in: generated/linear_model/HuberRegressor.ts:23

id

string

Defined in: generated/linear_model/HuberRegressor.ts:20

opts

any

Defined in: generated/linear_model/HuberRegressor.ts:21

Accessors

coef_

Coefficients obtained by optimizing the L2-regularized Huber loss.

Signature

coef_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/linear_model/HuberRegressor.ts:277

feature_names_in_

Names of features seen during fit. Defined only when X has feature names that are all strings.

Signature

feature_names_in_(): Promise<ArrayLike>;

Returns

Promise<ArrayLike>

Defined in: generated/linear_model/HuberRegressor.ts:373

intercept_

Bias.

Signature

intercept_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/linear_model/HuberRegressor.ts:300

n_features_in_

Number of features seen during fit.

Signature

n_features_in_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/linear_model/HuberRegressor.ts:348

n_iter_

Number of iterations that scipy.optimize.minimize(method="L-BFGS-B") has run for.

Signature

n_iter_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/linear_model/HuberRegressor.ts:398

outliers_

A boolean mask which is set to true where the samples are identified as outliers.

Signature

outliers_(): Promise<any>;

Returns

Promise<any>

Defined in: generated/linear_model/HuberRegressor.ts:423
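
The mask follows the same rule the loss uses to switch regimes: a sample is an outlier when its scaled absolute residual exceeds epsilon. As an illustrative sketch (a hypothetical helper, not part of the generated API):

```typescript
// Hypothetical helper: flag samples whose scaled residual |r| / sigma
// exceeds epsilon, mirroring how the fitted outliers_ mask is defined.
function outlierMask(
  residuals: number[],
  sigma: number,
  epsilon = 1.35
): boolean[] {
  return residuals.map((r) => Math.abs(r) / sigma > epsilon);
}

console.log(outlierMask([0.2, -0.9, 5.0], 1.0)); // [false, false, true]
```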

py

Signature

py(): PythonBridge;

Returns

PythonBridge

Defined in: generated/linear_model/HuberRegressor.ts:74

Signature

py(pythonBridge: PythonBridge): void;

Parameters

| Name | Type |
| --- | --- |
| `pythonBridge` | `PythonBridge` |

Returns

void

Defined in: generated/linear_model/HuberRegressor.ts:78

scale_

The value by which |y - Xw - c| is scaled down.

Signature

scale_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/linear_model/HuberRegressor.ts:325

Methods

dispose()

Disposes of the underlying Python resources.

Once dispose() is called, the instance is no longer usable.

Signature

dispose(): Promise<void>;

Returns

Promise<void>

Defined in: generated/linear_model/HuberRegressor.ts:133

fit()

Fit the model according to the given training data.

Signature

fit(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts` | `object` | - |
| `opts.X?` | `ArrayLike` | Training vector, where n_samples is the number of samples and n_features is the number of features. |
| `opts.sample_weight?` | `ArrayLike` | Weight given to each sample. |
| `opts.y?` | `ArrayLike` | Target vector relative to X. |

Returns

Promise<any>

Defined in: generated/linear_model/HuberRegressor.ts:150

init()

Initializes the underlying Python resources.

This instance is not usable until the Promise returned by init() resolves.

Signature

init(py: PythonBridge): Promise<void>;

Parameters

| Name | Type |
| --- | --- |
| `py` | `PythonBridge` |

Returns

Promise<void>

Defined in: generated/linear_model/HuberRegressor.ts:87

predict()

Predict using the linear model.

Signature

predict(opts: object): Promise<any>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts` | `object` | - |
| `opts.X?` | `any` | Samples. |

Returns

Promise<any>

Defined in: generated/linear_model/HuberRegressor.ts:195

score()

Return the coefficient of determination of the prediction.

The coefficient of determination R² is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R² score of 0.0.
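
The definition above can be translated directly into code. A minimal sketch in plain TypeScript (for illustration; the estimator computes this on the Python side):

```typescript
// R^2 computed exactly as defined: 1 - u / v, where u is the residual
// sum of squares and v is the total sum of squares around the mean.
function r2Score(yTrue: number[], yPred: number[]): number {
  const mean = yTrue.reduce((s, yt) => s + yt, 0) / yTrue.length;
  const u = yTrue.reduce((s, yt, i) => s + (yt - yPred[i]) ** 2, 0);
  const v = yTrue.reduce((s, yt) => s + (yt - mean) ** 2, 0);
  return 1 - u / v;
}

console.log(r2Score([1, 2, 3], [1, 2, 3])); // 1: perfect prediction
console.log(r2Score([1, 2, 3], [2, 2, 2])); // 0: constant mean predictor
```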

Signature

score(opts: object): Promise<number>;

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `opts` | `object` | - |
| `opts.X?` | `ArrayLike[]` | Test samples. For some estimators this may instead be a precomputed kernel matrix or a list of generic objects with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used to fit the estimator. |
| `opts.sample_weight?` | `ArrayLike` | Sample weights. |
| `opts.y?` | `ArrayLike` | True values for X. |

Returns

Promise<number>

Defined in: generated/linear_model/HuberRegressor.ts:230