
ClassificationMetrics

A collection of classification metrics.

Stub code in ClassificationMetrics.sdsstub
```sds
class ClassificationMetrics {
    /**
     * Summarize classification metrics on the given data.
     *
     * @param predicted The predicted target values produced by the classifier.
     * @param expected The expected target values.
     * @param positiveClass The class to be considered positive. All other classes are considered negative.
     *
     * @result metrics A table containing the classification metrics.
     */
    @Pure
    static fun summarize(
        predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        @PythonName("positive_class") positiveClass: Any
    ) -> metrics: Table

    /**
     * Compute the accuracy on the given data.
     *
     * The accuracy is the proportion of predicted target values that were correct. The **higher** the accuracy, the
     * better. Results range from 0.0 to 1.0.
     *
     * @param predicted The predicted target values produced by the classifier.
     * @param expected The expected target values.
     *
     * @result accuracy The calculated accuracy.
     */
    @Pure
    static fun accuracy(
        predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>
    ) -> accuracy: Float

    /**
     * Compute the F₁ score on the given data.
     *
     * The F₁ score is the harmonic mean of precision and recall. The **higher** the F₁ score, the better the
     * classifier. Results range from 0.0 to 1.0.
     *
     * @param predicted The predicted target values produced by the classifier.
     * @param expected The expected target values.
     * @param positiveClass The class to be considered positive. All other classes are considered negative.
     *
     * @result f1Score The calculated F₁ score.
     */
    @Pure
    @PythonName("f1_score")
    static fun f1Score(
        predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        @PythonName("positive_class") positiveClass: Any
    ) -> f1Score: Float

    /**
     * Compute the precision on the given data.
     *
     * The precision is the proportion of positive predictions that were correct. The **higher** the precision, the
     * better the classifier. Results range from 0.0 to 1.0.
     *
     * @param predicted The predicted target values produced by the classifier.
     * @param expected The expected target values.
     * @param positiveClass The class to be considered positive. All other classes are considered negative.
     *
     * @result precision The calculated precision.
     */
    @Pure
    static fun precision(
        predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        @PythonName("positive_class") positiveClass: Any
    ) -> precision: Float

    /**
     * Compute the recall on the given data.
     *
     * The recall is the proportion of actual positives that were predicted correctly. The **higher** the recall, the
     * better the classifier. Results range from 0.0 to 1.0.
     *
     * @param predicted The predicted target values produced by the classifier.
     * @param expected The expected target values.
     * @param positiveClass The class to be considered positive. All other classes are considered negative.
     *
     * @result recall The calculated recall.
     */
    @Pure
    static fun recall(
        predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
        @PythonName("positive_class") positiveClass: Any
    ) -> recall: Float
}
```

accuracy

Compute the accuracy on the given data.

The accuracy is the proportion of predicted target values that were correct. The higher the accuracy, the better. Results range from 0.0 to 1.0.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `predicted` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The predicted target values produced by the classifier. | - |
| `expected` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The expected target values. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| `accuracy` | `Float` | The calculated accuracy. |
Stub code in ClassificationMetrics.sdsstub
```sds
@Pure
static fun accuracy(
    predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>
) -> accuracy: Float
```
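The computation can be sketched in plain Python. This is an illustration of the definition above, not the Safe-DS implementation (which also accepts columns and datasets):

```python
def accuracy(predicted, expected):
    """Fraction of predictions that match the expected values (0.0 to 1.0)."""
    if len(predicted) != len(expected):
        raise ValueError("predicted and expected must have the same length")
    correct = sum(p == e for p, e in zip(predicted, expected))
    return correct / len(expected)

# Three of the four predictions are correct:
print(accuracy(["a", "b", "b", "a"], ["a", "b", "a", "a"]))  # 0.75
```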

f1Score

Compute the F₁ score on the given data.

The F₁ score is the harmonic mean of precision and recall. The higher the F₁ score, the better the classifier. Results range from 0.0 to 1.0.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `predicted` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The predicted target values produced by the classifier. | - |
| `expected` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The expected target values. | - |
| `positiveClass` | `Any` | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| `f1Score` | `Float` | The calculated F₁ score. |
Stub code in ClassificationMetrics.sdsstub
```sds
@Pure
@PythonName("f1_score")
static fun f1Score(
    predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> f1Score: Float
```
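The formula can be sketched in plain Python. This illustrates the definition only, not the Safe-DS implementation; in particular, the zero-division conventions below are assumptions:

```python
def f1_score(predicted, expected, positive_class):
    """Harmonic mean of precision and recall for the positive class."""
    pairs = list(zip(predicted, expected))
    tp = sum(p == positive_class and e == positive_class for p, e in pairs)
    fp = sum(p == positive_class and e != positive_class for p, e in pairs)
    fn = sum(p != positive_class and e == positive_class for p, e in pairs)
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    if precision + recall == 0:
        return 0.0  # assumed convention when both precision and recall are zero
    return 2 * precision * recall / (precision + recall)

# precision = 2/3 and recall = 1.0, so F1 = 2 * (2/3) / (5/3) ≈ 0.8:
print(f1_score(["p", "p", "n", "p"], ["p", "n", "n", "p"], "p"))
```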

precision

Compute the precision on the given data.

The precision is the proportion of positive predictions that were correct. The higher the precision, the better the classifier. Results range from 0.0 to 1.0.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `predicted` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The predicted target values produced by the classifier. | - |
| `expected` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The expected target values. | - |
| `positiveClass` | `Any` | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| `precision` | `Float` | The calculated precision. |
Stub code in ClassificationMetrics.sdsstub
```sds
@Pure
static fun precision(
    predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> precision: Float
```
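In plain Python, the definition reads as follows (an illustration, not the Safe-DS implementation; the zero-division convention is an assumption):

```python
def precision(predicted, expected, positive_class):
    """True positives divided by all positive predictions: TP / (TP + FP)."""
    tp = sum(p == positive_class and e == positive_class
             for p, e in zip(predicted, expected))
    predicted_positive = sum(p == positive_class for p in predicted)
    if predicted_positive == 0:
        return 0.0  # assumed convention when nothing was predicted positive
    return tp / predicted_positive

# Two of the three "p" predictions are correct:
print(precision(["p", "p", "n", "p"], ["p", "n", "n", "p"], "p"))  # ≈ 0.667
```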

recall

Compute the recall on the given data.

The recall is the proportion of actual positives that were predicted correctly. The higher the recall, the better the classifier. Results range from 0.0 to 1.0.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `predicted` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The predicted target values produced by the classifier. | - |
| `expected` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The expected target values. | - |
| `positiveClass` | `Any` | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| `recall` | `Float` | The calculated recall. |
Stub code in ClassificationMetrics.sdsstub
```sds
@Pure
static fun recall(
    predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> recall: Float
```
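In plain Python, the definition reads as follows (an illustration, not the Safe-DS implementation; the zero-division convention is an assumption):

```python
def recall(predicted, expected, positive_class):
    """True positives divided by all actual positives: TP / (TP + FN)."""
    tp = sum(p == positive_class and e == positive_class
             for p, e in zip(predicted, expected))
    actual_positive = sum(e == positive_class for e in expected)
    if actual_positive == 0:
        return 0.0  # assumed convention when there are no actual positives
    return tp / actual_positive

# Both actual "p" values were predicted correctly:
print(recall(["p", "p", "n", "p"], ["p", "n", "n", "p"], "p"))  # 1.0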

summarize

Summarize classification metrics on the given data.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `predicted` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The predicted target values produced by the classifier. | - |
| `expected` | `union<Column<Any>, TabularDataset, TimeSeriesDataset>` | The expected target values. | - |
| `positiveClass` | `Any` | The class to be considered positive. All other classes are considered negative. | - |

Results:

| Name | Type | Description |
|------|------|-------------|
| `metrics` | `Table` | A table containing the classification metrics. |
Stub code in ClassificationMetrics.sdsstub
```sds
@Pure
static fun summarize(
    predicted: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    expected: union<Column<Any>, TabularDataset, TimeSeriesDataset>,
    @PythonName("positive_class") positiveClass: Any
) -> metrics: Table
```
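Conceptually, summarize bundles the four metrics above into one result. A plain-Python sketch, with a dict standing in for the returned Table (the key names and zero-division conventions here are assumptions, not the Safe-DS output format):

```python
def summarize(predicted, expected, positive_class):
    """Compute accuracy, precision, recall, and F1 for the positive class."""
    pairs = list(zip(predicted, expected))
    tp = sum(p == positive_class and e == positive_class for p, e in pairs)
    fp = sum(p == positive_class and e != positive_class for p, e in pairs)
    fn = sum(p != positive_class and e == positive_class for p, e in pairs)
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    return {
        "accuracy": sum(p == e for p, e in pairs) / len(pairs),
        "precision": precision,
        "recall": recall,
        "f1_score": (2 * precision * recall / (precision + recall)
                     if precision + recall > 0 else 0.0),
    }

print(summarize(["p", "p", "n", "p"], ["p", "n", "n", "p"], "p"))
```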