What does AUC stand for and what is it?

  • Searched high and low and have not been able to find out what AUC, as in related to prediction, stands for or means.

    Check the description of `auc` tag you used: http://stats.stackexchange.com/questions/tagged/auc

    Area Under the Curve (i.e., ROC curve)

    Readers here may also be interested in the following thread: Understanding ROC curve.

    The expression "Searched high and low" is interesting since you can find plenty of excellent definitions/uses for AUC by typing "AUC" or "AUC statistics" into Google. Appropriate question of course, but that statement just caught me off guard!

    I did Google AUC but a lot of the top results didn't explicitly state AUC = Area Under Curve. The first Wikipedia page related to it does have it, but not until halfway down. In retrospect it does seem rather obvious! Thank you all for some really detailed answers.

  • Abbreviations

    AUC is used most of the time to mean AUROC, which is bad practice since, as Marc Claesen pointed out, AUC is ambiguous (it could refer to any curve) while AUROC is not.


    Interpreting the AUROC

    The AUROC has several equivalent interpretations:

    • The probability that a uniformly drawn random positive is ranked before a uniformly drawn random negative.
    • The expected proportion of positives ranked before a uniformly drawn random negative.
    • The expected true positive rate if the ranking is split just before a uniformly drawn random negative.
    • The expected proportion of negatives ranked after a uniformly drawn random positive.
    • The expected false positive rate if the ranking is split just after a uniformly drawn random positive.

    Going further: How to derive the probabilistic interpretation of the AUROC?
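
    As a sanity check of the first interpretation, here is a minimal Python sketch on made-up Gaussian scores (the score distributions are purely illustrative): the pairwise-ranking probability and the area under the empirical ROC curve come out essentially the same.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical scores: positives tend to score higher than negatives
    # (both distributions are made up purely for illustration).
    pos_scores = rng.normal(loc=1.0, scale=1.0, size=2000)
    neg_scores = rng.normal(loc=0.0, scale=1.0, size=2000)

    # Interpretation 1: probability that a uniformly drawn positive
    # is ranked above a uniformly drawn negative.
    pairwise = (pos_scores[:, None] > neg_scores[None, :]).mean()
    print("pairwise estimate:", pairwise)

    # Area under the empirical ROC curve, sweeping a threshold over all scores.
    all_scores = np.sort(np.concatenate([pos_scores, neg_scores]))[::-1]
    thresholds = np.concatenate(([np.inf], all_scores))
    tpr = np.array([(pos_scores >= t).mean() for t in thresholds])
    fpr = np.array([(neg_scores >= t).mean() for t in thresholds])
    print("area under ROC curve:", np.trapz(tpr, fpr))
    ```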


    Computing the AUROC

    Assume we have a probabilistic, binary classifier such as logistic regression.

    Before presenting the ROC curve (= Receiver Operating Characteristic curve), the concept of a confusion matrix must be understood. When we make a binary prediction, there can be 4 types of outcomes:

    • We predict 0 while the true class is actually 0: this is called a True Negative, i.e. we correctly predict that the class is negative (0). For example, an antivirus did not detect a harmless file as a virus.
    • We predict 0 while the true class is actually 1: this is called a False Negative, i.e. we incorrectly predict that the class is negative (0). For example, an antivirus failed to detect a virus.
    • We predict 1 while the true class is actually 0: this is called a False Positive, i.e. we incorrectly predict that the class is positive (1). For example, an antivirus considered a harmless file to be a virus.
    • We predict 1 while the true class is actually 1: this is called a True Positive, i.e. we correctly predict that the class is positive (1). For example, an antivirus rightfully detected a virus.

    To get the confusion matrix, we go over all the predictions made by the model and count how many times each of those 4 types of outcomes occurs:

    [Figure: example confusion matrix with counts of TP, FP, FN, TN]

    In this example of a confusion matrix, among the 50 data points that are classified, 45 are correctly classified and 5 are misclassified.
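
    A minimal sketch of that counting step in Python (the labels and predictions below are made up for illustration, not the 50 points from the figure):

    ```python
    import numpy as np

    # Hypothetical ground-truth labels and hard (0/1) predictions.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

    tp = np.sum((y_pred == 1) & (y_true == 1))   # predicted 1, actually 1
    tn = np.sum((y_pred == 0) & (y_true == 0))   # predicted 0, actually 0
    fp = np.sum((y_pred == 1) & (y_true == 0))   # predicted 1, actually 0
    fn = np.sum((y_pred == 0) & (y_true == 1))   # predicted 0, actually 1

    print(f"TP={tp}  FP={fp}\nFN={fn}  TN={tn}")
    ```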

    Since, to compare two different models, it is often more convenient to have a single metric rather than several, we compute two metrics from the confusion matrix, which we will later combine into one:

    • True positive rate (TPR), aka. sensitivity, hit rate, and recall, which is defined as $\frac{TP}{TP+FN}$. Intuitively this metric corresponds to the proportion of positive data points that are correctly considered as positive, with respect to all positive data points. In other words, the higher the TPR, the fewer positive data points we will miss.
    • False positive rate (FPR), aka. fall-out, which is defined as $\frac{FP}{FP+TN}$. Intuitively this metric corresponds to the proportion of negative data points that are mistakenly considered as positive, with respect to all negative data points. In other words, the higher the FPR, the more negative data points are misclassified.
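
    For instance, with hypothetical confusion-matrix counts (made-up numbers), the two rates are straightforward to compute:

    ```python
    # Hypothetical counts from a confusion matrix.
    tp, fn, fp, tn = 40, 5, 3, 52

    tpr = tp / (tp + fn)   # sensitivity / recall: fraction of positives we catch
    fpr = fp / (fp + tn)   # fall-out: fraction of negatives we wrongly flag
    print(f"TPR = {tpr:.3f}, FPR = {fpr:.3f}")
    ```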

    To combine the FPR and the TPR into one single metric, we first compute both metrics with many different thresholds (for example $0.00, 0.01, 0.02, \dots, 1.00$) for the logistic regression, then plot them on a single graph, with the FPR values on the abscissa and the TPR values on the ordinate. The resulting curve is called the ROC curve, and the metric we consider is the AUC of this curve, which we call AUROC.
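
    Here is a rough sketch of that threshold sweep in Python; the labels and scores are invented for illustration, and the trapezoidal rule stands in for the exact area computation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up labels and predicted probabilities standing in for the output
    # of a probabilistic classifier such as logistic regression.
    y_true = rng.integers(0, 2, size=200)
    y_score = 0.3 * y_true + 0.7 * rng.random(200)

    thresholds = np.arange(0.0, 1.01, 0.01)   # 0.00, 0.01, ..., 1.00
    tpr = np.array([((y_score >= t) & (y_true == 1)).sum() / (y_true == 1).sum()
                    for t in thresholds])
    fpr = np.array([((y_score >= t) & (y_true == 0)).sum() / (y_true == 0).sum()
                    for t in thresholds])

    # The ROC curve is the set of (FPR, TPR) points; AUROC is the area under it.
    auroc = np.trapz(tpr[::-1], fpr[::-1])    # reverse so that FPR is increasing
    print("AUROC:", auroc)
    ```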

    The following figure shows the AUROC graphically:

    [Figure: ROC curve with the AUROC shaded in blue; the dashed diagonal is the random predictor]

    In this figure, the blue area corresponds to the Area Under the Curve of the Receiver Operating Characteristic (AUROC). The dashed diagonal line represents the ROC curve of a random predictor: it has an AUROC of 0.5. The random predictor is commonly used as a baseline to see whether the model is useful.

    If you want to get some first-hand experience:
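
    A minimal sketch using scikit-learn (assuming it is installed; the synthetic dataset and the logistic regression are only for illustration):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, roc_curve

    # Synthetic binary classification problem.
    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]   # predicted probability of class 1

    fpr, tpr, thresholds = roc_curve(y_test, scores)
    print("AUROC:", roc_auc_score(y_test, scores))
    ```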

    Brilliant explanation. Thank you. One question just to clarify that I understand: am I right in saying that, on this graph, a solid blue square would correspond to an ROC curve with AUC = 1 and would be a good prediction model? I assume this is theoretically possible.

    @josh Yes, that's right. The AUROC is between 0 and 1, and AUROC = 1 means the prediction model is perfect. In fact, the further the AUROC is from 0.5, the better: if AUROC < 0.5, then you just need to invert the decisions your model is making. As a result, if AUROC = 0, that's good news, because you just need to invert your model's output to obtain a perfect model.
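
    As a quick numeric illustration of that inversion point (hypothetical scores, scikit-learn assumed available): negating the scores flips the AUROC around 0.5.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    scores = np.array([0.9, 0.8, 0.2, 0.3, 0.1, 0.7, 0.4, 0.6])  # a badly wrong model

    print(roc_auc_score(y, scores))    # 0.0: every positive is ranked below every negative
    print(roc_auc_score(y, -scores))   # 1.0 after flipping the sign of the scores
    ```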

    the link "several equivalent interpretations" is broken.

    @hxd1011 Stack Exchange should mirror linked pages.

    @FranckDernoncourt Great post. Thanks a lot!! Quick question: you said AUC less than 0.5 is good too, as it means we can invert the model's decisions. So are you saying that if I have AUC = 0.3, then whenever the model predicts an instance (x vector) as the positive label (1), I should convert it into the negative label (0)? If yes, isn't this contrary to what the model is predicting? The model says a particular instance is the positive label, and we say it's the negative class since AUC is less than 0.5, so let's invert the predictions. Isn't this going against what the model is predicting?

    I still have trouble understanding the curve... Shouldn't the abscissa be 1 - FPR instead of FPR? What does each point on this curve represent? I count around 50 "steps" on this graph; do they represent the TPR and FPR after each experiment, since we had 50 data points in the confusion matrix?

    Do you have any suggestions or suggested readings on how many samples are needed to generate a statistically robust AUROC? E.g., I have 100 positive and 15 negative cases in the training set; is the AUROC generated after training a binary classifier still useful?

    In AUROC interpretations "The expected false positive rate if the ranking is split just after a uniformly drawn random positive. ", shouldn't this be (1 - FPR)?

    @FranckDernoncourt - In your description of the AUC curve, the number of points plotted can be anything (depending on our resolution for the threshold). In other, more commonly seen descriptions (for example, the answer by Alexey here: https://stats.stackexchange.com/questions/105501/understanding-roc-curve), the number of points in that curve is the number of testing data points. These two don't seem to align. What am I missing?

    @ryu576 ideally the number of points in the ROC curve is indeed the number of testing samples.

    This is a great answer. But I always wondered what the connection was between the interpretation of AUC as $P(\mathrm{score}(x^+) > \mathrm{score}(x^-))$ and the way we calculate it by taking the integral over the curve. Check out my answer below where I connect the two mathematically.

    The last point in _Interpreting the AUROC_ should be "1- the expected FPR if..." instead of "the expected FPR if...", shouldn't it? (In the reference you gave, it is also stated as "1-..." in the slides.) Edit: Just saw that Mudit said the same before in a previously hidden comment.

License under CC-BY-SA with attribution

