LinearSVR

Linear Support Vector Regression.

Similar to SVR with parameter kernel='linear', but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.

This class supports both dense and sparse input.

Read more in the User Guide.

Python Reference

Constructors

constructor()

Signature

new LinearSVR(opts?: object): LinearSVR;

Parameters

opts? (object)
opts.C? (number): Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. Default Value: 1
opts.dual? (boolean): Select the algorithm to solve either the dual or the primal optimization problem. Prefer dual=false when n_samples > n_features. Default Value: true
opts.epsilon? (number): Epsilon parameter in the epsilon-insensitive loss function. Note that the value of this parameter depends on the scale of the target variable y. If unsure, set epsilon=0. Default Value: 0
opts.fit_intercept? (boolean): Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. the data is expected to be already centered). Default Value: true
opts.intercept_scaling? (number): When fit_intercept is true, the instance vector x becomes [x, intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note that the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased. Default Value: 1
opts.loss? ("epsilon_insensitive" | "squared_epsilon_insensitive"): Specifies the loss function. The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss ('squared_epsilon_insensitive') is the L2 loss; see the definitions after this list. Default Value: 'epsilon_insensitive'
opts.max_iter? (number): The maximum number of iterations to be run. Default Value: 1000
opts.random_state? (number): Controls the pseudo random number generation for shuffling the data. Pass an int for reproducible output across multiple function calls. See Glossary.
opts.tol? (number): Tolerance for the stopping criteria. Default Value: 0.0001
opts.verbose? (number): Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context. Default Value: 0
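
For reference, opts.epsilon and opts.loss follow the standard SVR epsilon-insensitive losses, written here with \(\hat{y}\) denoting the model prediction:

\(L_{\epsilon}(y, \hat{y}) = \max(0, |y - \hat{y}| - \epsilon)\) for 'epsilon_insensitive'

\(L_{\epsilon}(y, \hat{y})^2 = \max(0, |y - \hat{y}| - \epsilon)^2\) for 'squared_epsilon_insensitive'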

Returns

LinearSVR

Defined in: generated/svm/LinearSVR.ts:27
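
Example

A minimal construction sketch; the import path is an assumption, and only options documented above are passed. The later examples on this page reuse this svr instance.

import { LinearSVR } from 'sklearn'; // hypothetical import path

// Configure the estimator; every option is optional and falls back to the defaults listed above.
const svr = new LinearSVR({
  C: 1.0,
  epsilon: 0.1,
  loss: 'epsilon_insensitive',
  max_iter: 10000,
  random_state: 42,
});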

Properties

_isDisposed

boolean = false

Defined in: generated/svm/LinearSVR.ts:25

_isInitialized

boolean = false

Defined in: generated/svm/LinearSVR.ts:24

_py

PythonBridge

Defined in: generated/svm/LinearSVR.ts:23

id

string

Defined in: generated/svm/LinearSVR.ts:20

opts

any

Defined in: generated/svm/LinearSVR.ts:21

Accessors

coef_

Weights assigned to the features (coefficients in the primal problem).

coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.

Signature

coef_(): Promise<ArrayLike[]>;

Returns

Promise<ArrayLike[]>

Defined in: generated/svm/LinearSVR.ts:308

feature_names_in_

Names of features seen during fit. Defined only when X has feature names that are all strings.

Signature

feature_names_in_(): Promise<ArrayLike>;

Returns

Promise<ArrayLike>

Defined in: generated/svm/LinearSVR.ts:379

intercept_

Constants in the decision function.

Signature

intercept_(): Promise<ArrayLike>;

Returns

Promise<ArrayLike>

Defined in: generated/svm/LinearSVR.ts:331
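
Example

A sketch of reading the fitted parameters; it assumes the svr instance from the constructor example has already been initialized and fit (see init() and fit() below).

// Feature weights (primal coefficients) and the constant term of the decision function.
const weights = await svr.coef_;
const bias = await svr.intercept_;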

n_features_in_

Number of features seen during fit.

Signature

n_features_in_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/svm/LinearSVR.ts:354

n_iter_

Maximum number of iterations run across all classes.

Signature

n_iter_(): Promise<number>;

Returns

Promise<number>

Defined in: generated/svm/LinearSVR.ts:404

py

Signature

py(): PythonBridge;

Returns

PythonBridge

Defined in: generated/svm/LinearSVR.ts:100

Signature

py(pythonBridge: PythonBridge): void;

Parameters

pythonBridge (PythonBridge)

Returns

void

Defined in: generated/svm/LinearSVR.ts:104

Methods

dispose()

Disposes of the underlying Python resources.

Once dispose() is called, the instance is no longer usable.

Signature

dispose(): Promise<void>;

Returns

Promise<void>

Defined in: generated/svm/LinearSVR.ts:162
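
Example

A short sketch of the cleanup step, reusing the svr instance from the constructor example.

// Release the Python-side resources; svr must not be used after this resolves.
await svr.dispose();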

fit()

Fit the model according to the given training data.

Signature

fit(opts: object): Promise<any>;

Parameters

opts (object)
opts.X? (ArrayLike): Training vector, where n_samples is the number of samples and n_features is the number of features.
opts.sample_weight? (ArrayLike): Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight.
opts.y? (ArrayLike): Target vector relative to X.

Returns

Promise<any>

Defined in: generated/svm/LinearSVR.ts:179
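
Example

A hedged sketch of fitting the svr instance from the constructor example; it assumes svr has been initialized (see init() below), and the training data is made up for illustration.

// Four samples with two features each, plus their targets.
const X = [
  [0, 0],
  [1, 1],
  [2, 2],
  [3, 3],
];
const y = [0.1, 1.1, 2.0, 2.9];

await svr.fit({ X, y });
// Optionally weight individual samples:
// await svr.fit({ X, y, sample_weight: [1, 1, 2, 1] });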

init()

Initializes the underlying Python resources.

This instance is not usable until the Promise returned by init() resolves.

Signature

init(py: PythonBridge): Promise<void>;

Parameters

py (PythonBridge)

Returns

Promise<void>

Defined in: generated/svm/LinearSVR.ts:113
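
Example

A sketch of the initialization step; how the PythonBridge instance (py below) is created depends on the host application and is not shown here.

// The estimator proxies its work to Python, so it must be initialized before use.
await svr.init(py);
// Once the Promise resolves, fit(), predict(), score() and the accessors become usable.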

predict()

Predict using the linear model.

Signature

predict(opts: object): Promise<any>;

Parameters

opts (object)
opts.X? (any): Samples.

Returns

Promise<any>

Defined in: generated/svm/LinearSVR.ts:226
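
Example

A sketch of prediction with the fitted svr instance; the query samples are illustrative.

// Predict targets for two new samples with the same number of features as the training data.
const predictions = await svr.predict({ X: [[1.5, 1.5], [4, 4]] });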

score()

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Signature

score(opts: object): Promise<number>;

Parameters

opts (object)
opts.X? (ArrayLike[]): Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
opts.sample_weight? (ArrayLike): Sample weights.
opts.y? (ArrayLike): True values for X.

Returns

Promise<number>

Defined in: generated/svm/LinearSVR.ts:259
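
Example

A sketch of evaluating the fitted svr instance on held-out data; the test arrays are illustrative.

// R^2 of the predictions; 1.0 is a perfect fit, and the score can be negative.
const XTest = [[1.5, 1.5], [4, 4]];
const yTest = [1.4, 3.8];
const r2 = await svr.score({ X: XTest, y: yTest });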