GradientBoostingClassifier

Gradient boosting classification.

Parent type: Classifier

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| treeCount | Int | The number of boosting stages to perform. Gradient boosting is fairly robust to overfitting, so a larger number of stages usually results in better performance. | 100 |
| learningRate | Float | The larger the value, the more the model is influenced by each additional tree. If the learning rate is too low, the model might underfit; if it is too high, the model might overfit. | 0.1 |

Examples:

pipeline example {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val classifier = GradientBoostingClassifier(treeCount = 50).fit(training);
    val accuracy = classifier.accuracy(test);
}
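
Since a lower learningRate generally calls for more boosting stages, the two parameters are often tuned together. A minimal sketch, reusing the hypothetical training.csv and test.csv files from the example above:

pipeline tunedExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    // Smaller steps per tree, compensated by more boosting stages.
    val classifier = GradientBoostingClassifier(treeCount = 200, learningRate = 0.05).fit(training);
    val accuracy = classifier.accuracy(test);
}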
Stub code in GradientBoostingClassifier.sdsstub

class GradientBoostingClassifier(
    @PythonName("tree_count") const treeCount: Int = 100,
    @PythonName("learning_rate") const learningRate: Float = 0.1
) sub Classifier where {
    treeCount >= 1,
    learningRate > 0.0
} {
    /**
     * The number of trees (estimators) in the ensemble.
     */
    @PythonName("tree_count") attr treeCount: Int
    /**
     * The learning rate.
     */
    @PythonName("learning_rate") attr learningRate: Float

    /**
     * Create a copy of this classifier and fit it with the given training data.
     *
     * This classifier is not modified.
     *
     * @param trainingSet The training data containing the feature and target vectors.
     *
     * @result fittedClassifier The fitted classifier.
     */
    @Pure
    @Category(DataScienceCategory.ModelingQClassicalClassification)
    fun fit(
        @PythonName("training_set") trainingSet: TabularDataset
    ) -> fittedClassifier: GradientBoostingClassifier
}

isFitted

Whether the model is fitted.

Type: Boolean

learningRate

The learning rate.

Type: Float

treeCount

The number of trees (estimators) in the ensemble.

Type: Int

accuracy

Compute the accuracy of the classifier on the given data.

The accuracy is the proportion of predicted target values that were correct. The higher the accuracy, the better. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union&lt;Table, TabularDataset&gt; | The validation or test set. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| accuracy | Float | The classifier's accuracy. |
Stub code in Classifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun accuracy(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>
) -> accuracy: Float
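
A minimal usage sketch, assuming hypothetical training.csv and test.csv files that share a "target" column:

pipeline accuracyExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val fitted = GradientBoostingClassifier().fit(training);
    // Proportion of test rows whose target value was predicted correctly.
    val accuracy = fitted.accuracy(test);
}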

f1Score

Compute the classifier's F₁ score on the given data.

The F₁ score is the harmonic mean of precision and recall. The higher the F₁ score, the better the classifier. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union&lt;Table, TabularDataset&gt; | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| f1Score | Float | The classifier's F₁ score. |
Stub code in Classifier.sdsstub

@Pure
@PythonName("f1_score")
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun f1Score(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> f1Score: Float
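
A minimal sketch, assuming the hypothetical CSV files from above and a binary target whose positive label is "yes":

pipeline f1Example {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val fitted = GradientBoostingClassifier().fit(training);
    // "yes" is a hypothetical positive label for this sketch.
    val f1 = fitted.f1Score(test, positiveClass = "yes");
}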

fit

Create a copy of this classifier and fit it with the given training data.

This classifier is not modified.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| trainingSet | TabularDataset | The training data containing the feature and target vectors. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| fittedClassifier | GradientBoostingClassifier | The fitted classifier. |
Stub code in GradientBoostingClassifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelingQClassicalClassification)
fun fit(
    @PythonName("training_set") trainingSet: TabularDataset
) -> fittedClassifier: GradientBoostingClassifier
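
Because fit returns a fitted copy rather than mutating the receiver, the original classifier stays reusable. A minimal sketch, assuming a hypothetical training.csv:

pipeline fitExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val unfitted = GradientBoostingClassifier();
    val fitted = unfitted.fit(training);
    // The receiver is untouched; only the returned copy is fitted.
    val stillUnfitted = unfitted.isFitted; // false
    val nowFitted = fitted.isFitted;       // true
}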

getFeatureNames

Return the names of the feature columns.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| featureNames | List&lt;String&gt; | The names of the feature columns. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_feature_names")
fun getFeatureNames() -> featureNames: List<String>

getFeaturesSchema

Return the schema of the feature columns.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| featureSchema | Schema | The schema of the feature columns. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_features_schema")
fun getFeaturesSchema() -> featureSchema: Schema

getTargetName

Return the name of the target column.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| targetName | String | The name of the target column. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_target_name")
fun getTargetName() -> targetName: String

getTargetType

Return the type of the target column.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| targetType | ColumnType | The type of the target column. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_target_type")
fun getTargetType() -> targetType: ColumnType
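
The four getters above (getFeatureNames, getFeaturesSchema, getTargetName, getTargetType) all require a fitted model. A minimal combined sketch, assuming the hypothetical training.csv from earlier:

pipeline inspectionExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val fitted = GradientBoostingClassifier().fit(training);
    val featureNames = fitted.getFeatureNames();
    val featureSchema = fitted.getFeaturesSchema();
    val targetName = fitted.getTargetName(); // "target"
    val targetType = fitted.getTargetType();
}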

precision

Compute the classifier's precision on the given data.

The precision is the proportion of positive predictions that were correct. The higher the precision, the better the classifier. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union&lt;Table, TabularDataset&gt; | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| precision | Float | The classifier's precision. |
Stub code in Classifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun precision(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> precision: Float

predict

Predict the target values on the given dataset.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| dataset | union&lt;Table, TabularDataset&gt; | The dataset containing at least the features. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| prediction | TabularDataset | The given dataset with an additional column for the predicted target values. |
Stub code in SupervisedModel.sdsstub

@Pure
fun predict(
    dataset: union<Table, TabularDataset>
) -> prediction: TabularDataset
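
A minimal sketch: since the parameter accepts a plain Table, unlabeled data only needs the feature columns. The file unlabeled.csv is a hypothetical placeholder:

pipeline predictExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val fitted = GradientBoostingClassifier().fit(training);
    // A plain Table works too; it must contain at least the feature columns.
    val unlabeled = Table.fromCsvFile("unlabeled.csv");
    val prediction = fitted.predict(unlabeled);
}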

recall

Compute the classifier's recall on the given data.

The recall is the proportion of actual positives that were predicted correctly. The higher the recall, the better the classifier. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union&lt;Table, TabularDataset&gt; | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| recall | Float | The classifier's recall. |
Stub code in Classifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun recall(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> recall: Float
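
precision and recall share the positive-class convention described above. A minimal sketch covering both, again with a hypothetical positive label "yes":

pipeline precisionRecallExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val fitted = GradientBoostingClassifier().fit(training);
    // "yes" is a hypothetical positive label for this sketch.
    val precision = fitted.precision(test, positiveClass = "yes");
    val recall = fitted.recall(test, positiveClass = "yes");
}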

summarizeMetrics

Summarize the classifier's metrics on the given data.

Note: The model must be fitted.

API Stability

Do not rely on the exact output of this method. In future versions, we may change the displayed metrics without prior notice.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union&lt;Table, TabularDataset&gt; | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| metrics | Table | A table containing the classifier's metrics. |
Stub code in Classifier.sdsstub

@Pure
@PythonName("summarize_metrics")
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun summarizeMetrics(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> metrics: Table
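
A minimal sketch; per the stability note above, treat the exact contents of the returned table as subject to change. The positive label "yes" is hypothetical:

pipeline metricsExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val fitted = GradientBoostingClassifier().fit(training);
    // One table with the classifier's metrics; exact columns may change between versions.
    val metrics = fitted.summarizeMetrics(test, positiveClass = "yes");
}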