# Glossary

## Performance Metrics: F1-Score

What is the F1-Score? The F1-Score has many names: F-Score, F-Measure, Sørensen’s Similarity Coefficient, Sørensen–Dice Coefficient, Dice Similarity Coefficient (DSC), Dice’s Coincidence Index, and Hellden’s Mean Accuracy Index. The F1-Score is a metric to evaluate the performance of a binary classifier. It is calculated as the harmonic mean of the precision…
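
The harmonic-mean definition above reduces to a single expression in confusion-matrix counts; a minimal sketch (function name and arguments are illustrative, not from the glossary entry):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2 * TP / (2 * TP + FP + FN), the harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn)

print(f1_score(tp=8, fp=2, fn=2))  # → 0.8
```

Note that true negatives do not appear in the formula, which is why F1 behaves differently from accuracy on unbalanced data.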

## Performance Metrics: False Discovery Rate

What is the False Discovery Rate? Given a positive prediction, the False Discovery Rate (FDR) is the performance metric that tells you the probability that the true value is negative. It is closely related to the False Omission Rate, which is completely analogous. The complement of the False Discovery Rate…
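
As a sketch of the definition above, the FDR is the share of positive predictions that are actually negative (names here are illustrative):

```python
def false_discovery_rate(tp: int, fp: int) -> float:
    """FDR = FP / (FP + TP): probability a positive prediction is wrong."""
    return fp / (fp + tp)

print(false_discovery_rate(tp=8, fp=2))  # → 0.2
```

Its complement, 1 − FDR, is the Positive Predictive Value (precision).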

## Performance Metrics: False Negative Rate

What is the False Negative Rate? The False Negative Rate (Miss Rate) is a performance metric that measures the probability that your model will predict negative when the true value is positive. It is closely related to the False Positive Rate, which is completely analogous. The True Positive Rate and the…
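
A minimal sketch of the Miss Rate as defined above, computed from counts of actual positives (function name is illustrative):

```python
def false_negative_rate(tp: int, fn: int) -> float:
    """FNR = FN / (FN + TP): probability of predicting negative on a true positive."""
    return fn / (fn + tp)

print(false_negative_rate(tp=8, fn=2))  # → 0.2
```

Its complement, 1 − FNR, is the True Positive Rate (recall).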

## Performance Metrics: False Omission Rate

What is the False Omission Rate? Given a negative prediction, the False Omission Rate (FOR) is the performance metric that tells you the probability that the true value is positive. It is closely related to the False Discovery Rate, which is completely analogous. The complement of the False Omission Rate…
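
Mirroring the FDR but conditioned on negative predictions, the FOR can be sketched as (names are illustrative):

```python
def false_omission_rate(tn: int, fn: int) -> float:
    """FOR = FN / (FN + TN): probability a negative prediction is wrong."""
    return fn / (fn + tn)

print(false_omission_rate(tn=8, fn=2))  # → 0.2
```

Its complement, 1 − FOR, is the Negative Predictive Value.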

## Performance Metrics: False Positive Rate

What is the False Positive Rate? The False Positive Rate (Fall-out) is a performance metric that measures the probability that your model will predict positive when the true value is negative. It is closely related to the False Negative Rate, which is completely analogous. The False Positive Rate and the True…
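
A minimal sketch of the Fall-out as defined above, computed from counts of actual negatives (function name is illustrative):

```python
def false_positive_rate(tn: int, fp: int) -> float:
    """FPR = FP / (FP + TN): probability of predicting positive on a true negative."""
    return fp / (fp + tn)

print(false_positive_rate(tn=8, fp=2))  # → 0.2
```

Its complement, 1 − FPR, is the True Negative Rate (specificity).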

## Performance Metrics: Jaccard Index

What is the Jaccard Index? The Jaccard Index (JI) is a performance metric that also has many other applications in a variety of fields and domains. That’s why it has a lot of names: Jaccard Coefficient, Jaccard’s Coefficient of Community, Jaccard Similarity Coefficient, Jaccard–Tanimoto Coefficient, Tanimoto Similarity Coefficient, Short’s Measure…
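
When used as a classification metric, the Jaccard Index is intersection over union of the predicted-positive and actual-positive sets; a sketch in confusion-matrix counts (names are illustrative):

```python
def jaccard_index(tp: int, fp: int, fn: int) -> float:
    """JI = TP / (TP + FP + FN): intersection over union of positives."""
    return tp / (tp + fp + fn)

print(jaccard_index(tp=6, fp=2, fn=2))  # → 0.6
```

Like F1, it ignores true negatives; the two are monotonically related (F1 = 2J / (1 + J)).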

## Performance Metrics: Matthews Correlation Coefficient

What is the Matthews Correlation Coefficient? Matthews Correlation Coefficient (MCC) has many names: Phi Coefficient, Pearson’s Phi Coefficient, Yule Phi Coefficient. Contrary to other performance metrics (such as F1-Score), the MCC is regarded as one of the best measures to evaluate class predictions in a binary setting — even if…
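
The MCC uses all four confusion-matrix cells; a minimal sketch of the standard formula (function name is illustrative, and the zero-denominator case is handled by convention as 0):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=6, tn=6, fp=2, fn=2))  # → 0.5
```

It ranges from −1 (total disagreement) through 0 (random) to +1 (perfect prediction).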

## Performance Metrics: Misclassification Rate

What is the Misclassification Rate? The Misclassification Rate is a performance metric that tells you the fraction of the predictions that were wrong, without distinguishing between positive and negative predictions. The Misclassification Rate can be a very misleading metric when the data set is unbalanced (when the prevalence is either very…
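
A sketch of the definition above: wrong predictions over all predictions, regardless of class (names are illustrative):

```python
def misclassification_rate(tp: int, tn: int, fp: int, fn: int) -> float:
    """(FP + FN) / total: fraction of predictions that were wrong."""
    return (fp + fn) / (tp + tn + fp + fn)

print(misclassification_rate(tp=6, tn=6, fp=2, fn=2))  # → 0.25
```

Its complement is accuracy, which shares the same weakness on unbalanced data: always predicting the majority class can yield a deceptively low misclassification rate.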

## Performance Metrics: Negative Likelihood Ratio

What is the Negative Likelihood Ratio? The Negative Likelihood Ratio (LR-, -LR, likelihood ratio negative or likelihood ratio for negative results) gives the change in odds of the true value being positive when the predicted value is negative. This is expressed as a ratio. It is analogous to the Positive…
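
The LR- is conventionally computed as the miss rate divided by the specificity; a minimal sketch (function name is illustrative):

```python
def negative_likelihood_ratio(tp: int, tn: int, fp: int, fn: int) -> float:
    """LR- = FNR / TNR = (FN / (FN + TP)) / (TN / (TN + FP))."""
    fnr = fn / (fn + tp)
    specificity = tn / (tn + fp)
    return fnr / specificity

print(negative_likelihood_ratio(tp=8, tn=8, fp=2, fn=2))
```

A value below 1 means a negative prediction lowers the odds that the true value is positive; the smaller the ratio, the more informative the negative result.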

## Performance Metrics: Negative Predictive Value

What is the Negative Predictive Value? Given a negative prediction, the Negative Predictive Value (NPV) is the performance metric that tells you the probability that the true value is negative. It is closely related to the Positive Predictive Value, which is completely analogous. The complement of the Negative Predictive Value…
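
As a sketch of the definition above, the NPV is the share of negative predictions that are actually negative (names are illustrative):

```python
def negative_predictive_value(tn: int, fn: int) -> float:
    """NPV = TN / (TN + FN): probability a negative prediction is correct."""
    return tn / (tn + fn)

print(negative_predictive_value(tn=8, fn=2))  # → 0.8
```

Its complement, 1 − NPV, is the False Omission Rate.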