KNearestNeighborsClassifier

K-nearest-neighbors classification.

Parent type: Classifier

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| neighborCount | Int | The number of neighbors to use for interpolation. Must be greater than 0 (validated in the constructor) and less than or equal to the sample size (validated when calling fit). | - |

Examples:

pipeline example {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val accuracy = classifier.accuracy(test);
}
Stub code in KNearestNeighborsClassifier.sdsstub

class KNearestNeighborsClassifier(
    @PythonName("neighbor_count") const neighborCount: Int
) sub Classifier where {
    neighborCount >= 1
} {
    /**
     * The number of neighbors used for interpolation.
     */
    @PythonName("neighbor_count") attr neighborCount: Int

    /**
     * Create a copy of this classifier and fit it with the given training data.
     *
     * This classifier is not modified.
     *
     * @param trainingSet The training data containing the feature and target vectors.
     *
     * @result fittedClassifier The fitted classifier.
     */
    @Pure
    @Category(DataScienceCategory.ModelingQClassicalClassification)
    fun fit(
        @PythonName("training_set") trainingSet: TabularDataset
    ) -> fittedClassifier: KNearestNeighborsClassifier
}

isFitted

Whether the model is fitted.

Type: Boolean
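
For example, isFitted is false on a freshly constructed classifier and true on the copy returned by fit. A minimal sketch, reusing the training.csv setup from the class example:

pipeline checkIsFitted {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5);
    val before = classifier.isFitted;              // false: the original is never modified
    val after = classifier.fit(training).isFitted; // true: fit returns a fitted copy
}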

neighborCount

The number of neighbors used for interpolation.

Type: Int
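
The attribute simply echoes the value passed to the constructor; for instance:

pipeline readNeighborCount {
    val classifier = KNearestNeighborsClassifier(5);
    val k = classifier.neighborCount; // 5
}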

accuracy

Compute the accuracy of the classifier on the given data.

The accuracy is the proportion of predicted target values that were correct. The higher the accuracy, the better. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union<Table, TabularDataset> | The validation or test set. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| accuracy | Float | The classifier's accuracy. |
Stub code in Classifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun accuracy(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>
) -> accuracy: Float
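
Because validationOrTestSet is a union type, a plain Table containing the feature and target columns is accepted as well as a TabularDataset. A minimal sketch, with file names as in the class example:

pipeline evaluateAccuracy {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val testTable = Table.fromCsvFile("test.csv"); // a plain Table works, too
    val accuracy = classifier.accuracy(testTable);
}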

f1Score

Compute the classifier's F₁ score on the given data.

The F₁ score is the harmonic mean of precision and recall. The higher the F₁ score, the better the classifier. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union<Table, TabularDataset> | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| f1Score | Float | The classifier's F₁ score. |
Stub code in Classifier.sdsstub

@Pure
@PythonName("f1_score")
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun f1Score(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> f1Score: Float
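
A sketch of computing the F₁ score for one class, reusing the class example's files; the positive-class value 1 is a placeholder assumption about the target column:

pipeline evaluateF1 {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val f1 = classifier.f1Score(test, positiveClass = 1); // 1 is a placeholder class label
}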

fit

Create a copy of this classifier and fit it with the given training data.

This classifier is not modified.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| trainingSet | TabularDataset | The training data containing the feature and target vectors. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| fittedClassifier | KNearestNeighborsClassifier | The fitted classifier. |
Stub code in KNearestNeighborsClassifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelingQClassicalClassification)
fun fit(
    @PythonName("training_set") trainingSet: TabularDataset
) -> fittedClassifier: KNearestNeighborsClassifier
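
Because fit returns the fitted copy rather than modifying the receiver, construction and fitting chain naturally; a minimal sketch using the class example's training file:

pipeline fitExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val unfitted = KNearestNeighborsClassifier(5);
    val fitted = unfitted.fit(training); // returns a fitted copy; `unfitted` is unchanged
}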

getFeatureNames

Return the names of the feature columns.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| featureNames | List<String> | The names of the feature columns. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_feature_names")
fun getFeatureNames() -> featureNames: List<String>

getFeaturesSchema

Return the schema of the feature columns.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| featureSchema | Schema | The schema of the feature columns. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_features_schema")
fun getFeaturesSchema() -> featureSchema: Schema

getTargetName

Return the name of the target column.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| targetName | String | The name of the target column. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_target_name")
fun getTargetName() -> targetName: String

getTargetType

Return the type of the target column.

Note: The model must be fitted.

Results:

| Name | Type | Description |
|------|------|-------------|
| targetType | ColumnType | The type of the target column. |
Stub code in SupervisedModel.sdsstub

@Pure
@PythonName("get_target_type")
fun getTargetType() -> targetType: ColumnType

precision

Compute the classifier's precision on the given data.

The precision is the proportion of positive predictions that were correct. The higher the precision, the better the classifier. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union<Table, TabularDataset> | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| precision | Float | The classifier's precision. |
Stub code in Classifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun precision(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> precision: Float
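
A sketch mirroring the F₁ example; the positive-class value 1 is again a placeholder:

pipeline evaluatePrecision {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val precision = classifier.precision(test, positiveClass = 1);
}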

predict

Predict the target values on the given dataset.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| dataset | union<Table, TabularDataset> | The dataset containing at least the features. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| prediction | TabularDataset | The given dataset with an additional column for the predicted target values. |
Stub code in SupervisedModel.sdsstub

@Pure
fun predict(
    dataset: union<Table, TabularDataset>
) -> prediction: TabularDataset
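
Since the dataset only needs to contain the features, predict can label new, unlabeled data. A sketch; the file name unlabeled.csv is hypothetical:

pipeline predictExample {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val unlabeled = Table.fromCsvFile("unlabeled.csv"); // hypothetical file with feature columns only
    val prediction = classifier.predict(unlabeled);     // TabularDataset with a predicted target column
}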

recall

Compute the classifier's recall on the given data.

The recall is the proportion of actual positives that were predicted correctly. The higher the recall, the better the classifier. Results range from 0.0 to 1.0.

Note: The model must be fitted.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union<Table, TabularDataset> | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| recall | Float | The classifier's recall. |
Stub code in Classifier.sdsstub

@Pure
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun recall(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> recall: Float
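
A sketch analogous to the precision example; the positive-class value 1 is a placeholder:

pipeline evaluateRecall {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val recall = classifier.recall(test, positiveClass = 1);
}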

summarizeMetrics

Summarize the classifier's metrics on the given data.

Note: The model must be fitted.

API Stability

Do not rely on the exact output of this method. In future versions, we may change the displayed metrics without prior notice.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| validationOrTestSet | union<Table, TabularDataset> | The validation or test set. | - |
| positiveClass | Any | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| metrics | Table | A table containing the classifier's metrics. |
Stub code in Classifier.sdsstub

@Pure
@PythonName("summarize_metrics")
@Category(DataScienceCategory.ModelEvaluationQMetric)
fun summarizeMetrics(
    @PythonName("validation_or_test_set") validationOrTestSet: union<Table, TabularDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> metrics: Table
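
A sketch that collects the metrics above into a single table; the positive-class value 1 is a placeholder:

pipeline summarize {
    val training = Table.fromCsvFile("training.csv").toTabularDataset("target");
    val test = Table.fromCsvFile("test.csv").toTabularDataset("target");
    val classifier = KNearestNeighborsClassifier(5).fit(training);
    val metrics = classifier.summarizeMetrics(test, positiveClass = 1);
}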