How do you calculate precision and recall for multiclass classification using a confusion matrix?

  • I wonder how to compute precision and recall using a confusion matrix for a multi-class classification problem. Specifically, an observation can only be assigned to its most probable class / label. I would like to compute:

    • Precision = TP / (TP+FP)
    • Recall = TP / (TP+FN)

    for each class, and then compute the micro-averaged F-measure.

This document, *Evaluating a classification model – What does precision and recall tell me?*, from Compumine provides a simple introduction to the confusion matrix and the measures derived from it: how to build the confusion matrix and how precision, recall, specificity and accuracy are computed from it.

You can find the answer here; it is a very good explanation: https://www.youtube.com/watch?v=FAr2GmWNbT0

    The Compumine link is dead.

For the multiclass case, my understanding is that recall is computed along the rows (axis=0) and precision along the columns (axis=1). https://rxnlp.com/computing-precision-and-recall-for-multi-class-classification-problems/#.XJobF6Qo_IU
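
Here is a minimal sketch of the computation asked for above, assuming the confusion matrix is oriented with true classes along the rows and predicted classes along the columns (the matrix values and variable names are just illustrative):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[5, 1, 0],
               [2, 3, 1],
               [0, 1, 4]])

tp = np.diag(cm)                   # correctly classified instances of each class
precision = tp / cm.sum(axis=0)    # column sums = everything predicted as class i
recall = tp / cm.sum(axis=1)       # row sums = everything that truly is class i

# Micro-averaging pools the counts over all classes before dividing.
# With single-label predictions, the total FP and the total FN are both equal to
# the sum of the off-diagonal entries, so micro-precision = micro-recall
# = micro-F = overall accuracy.
micro_precision = tp.sum() / cm.sum()
micro_recall = tp.sum() / cm.sum()
micro_f = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)

print(precision, recall, micro_f)
```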

• Dave (correct answer, 8 years ago)

    In a 2-hypothesis case, the confusion matrix is usually:

|       | Declare H1 | Declare H0 |
|-------|------------|------------|
| Is H1 | TP         | FN         |
| Is H0 | FP         | TN         |

    where I've used something similar to your notation:

• TP = true positive (declare H1 when, in truth, H1),
• FN = false negative (declare H0 when, in truth, H1),
• FP = false positive (declare H1 when, in truth, H0),
• TN = true negative (declare H0 when, in truth, H0).

    From the raw data, the values in the table would typically be the counts for each occurrence over the test data. From this, you should be able to compute the quantities you need.
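
For instance, in the 2-hypothesis case those counts can be tallied directly from paired truth/decision labels; a small sketch with made-up arrays:

```python
import numpy as np

# Hypothetical test data: 1 = H1, 0 = H0.
truth = np.array([1, 1, 0, 1, 0, 0, 1, 0])
declare = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((truth == 1) & (declare == 1))   # declared H1, truly H1
fn = np.sum((truth == 1) & (declare == 0))   # declared H0, truly H1
fp = np.sum((truth == 0) & (declare == 1))   # declared H1, truly H0
tn = np.sum((truth == 0) & (declare == 0))   # declared H0, truly H0

precision = tp / (tp + fp)
recall = tp / (tp + fn)
```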

    Edit

The generalization to multi-class problems is to sum over rows / columns of the confusion matrix. Given that the matrix is oriented as above, i.e., that a given row of the matrix corresponds to a specific value of the "truth", we have:

$\text{Precision}_i = \frac{M_{ii}}{\sum_j M_{ji}}$

$\text{Recall}_i = \frac{M_{ii}}{\sum_j M_{ij}}$

That is, precision is the fraction of events where we correctly declared $i$ out of all instances where the algorithm declared $i$. Conversely, recall is the fraction of events where we correctly declared $i$ out of all of the cases where the true state of the world is $i$.
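
As a concrete, made-up illustration of these formulas, take the 3-class matrix

$M = \begin{pmatrix} 4 & 1 & 1 \\ 0 & 3 & 2 \\ 1 & 0 & 5 \end{pmatrix}$

with rows as the truth and columns as the declarations. Then for class 2, $\text{Precision}_2 = \frac{M_{22}}{M_{12}+M_{22}+M_{32}} = \frac{3}{1+3+0} = \frac{3}{4}$, while $\text{Recall}_2 = \frac{M_{22}}{M_{21}+M_{22}+M_{23}} = \frac{3}{0+3+2} = \frac{3}{5}$.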

In my case, there are 10+ classes, so I guess FN will mean the total count of cases where we declare class $H_i$, $i \ne 1$, while the truth is $H_1$; and similarly for FP?

Hi, I wonder what the values of precision and recall will be if TP+FP=0 or TP+FN=0 for some actual class in the confusion matrix.

    The precision for class `i` is undefined if there are no instances where the algorithm declares `i`. The recall for class `i` is undefined if the test set does not include class `i`.
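
In code, one way to keep those undefined cases explicit is to return them as NaN and let the caller decide how to handle them; a sketch under that assumption (the helper name and the NaN convention are mine, not something from this thread):

```python
import numpy as np

def per_class_precision_recall(cm):
    """Per-class precision/recall from a confusion matrix (rows = truth, columns = declared).

    Classes with a zero denominator come back as np.nan, so the caller can
    decide whether to skip them or substitute a value before macro-averaging.
    """
    tp = np.diag(cm).astype(float)
    declared = cm.sum(axis=0).astype(float)   # TP + FP for each class
    actual = cm.sum(axis=1).astype(float)     # TP + FN for each class
    with np.errstate(divide='ignore', invalid='ignore'):
        precision = np.where(declared > 0, tp / declared, np.nan)
        recall = np.where(actual > 0, tp / actual, np.nan)
    return precision, recall
```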

My final goal is to calculate the macro F-measure, so I need precision and recall values for each class $i$. How can I compute the macro F-measure if either of the above two cases occurs for some class $i$? In particular, what value should $F_i$ take, and should class $i$ still count as one of the $M$ classes, i.e., toward the denominator in the macro F-measure formula?

I tend to use Bayesian logic, so I'd posit an a priori estimate; maybe something like the typical behaviour on your other classes. Another, worst-case, approach is to derive the precision and recall values for an algorithm that just guesses randomly.

Sorry, could you explain your idea more clearly?

The worst-case precision and recall is $1/N_{\text{classes}}$; this is what you'd get if your algorithm just guessed uniformly (assuming roughly balanced classes).
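
One way to operationalize that suggestion, as a sketch: substitute the worst-case value $1/N_{\text{classes}}$ for any class whose precision or recall is undefined, then macro-average the per-class F-scores. This substitution rule is just one possible convention, not a standard recipe:

```python
import numpy as np

def macro_f_with_fallback(precision, recall):
    """Macro F-measure where undefined (NaN) per-class values are replaced
    by the uniform-guessing worst case 1/N_classes suggested above."""
    n_classes = len(precision)
    fallback = 1.0 / n_classes
    p = np.where(np.isnan(precision), fallback, precision)
    r = np.where(np.isnan(recall), fallback, recall)
    with np.errstate(divide='ignore', invalid='ignore'):
        f = np.where(p + r > 0, 2 * p * r / (p + r), 0.0)
    return f.mean()
```

With the `per_class_precision_recall` sketch above, `macro_f_with_fallback(*per_class_precision_recall(cm))` would give the macro F-measure.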

I usually use the Kappa coefficient for multiclass classification.

Is calculating precision similar to calculating the mean accuracy of each class over 10 experiments?

Licensed under CC BY-SA with attribution.

