By Rosmina Joy Cabauatan

Performance of a Classification Model Using Confusion Matrix

A confusion matrix is used to describe the performance of a classification model. For a binary problem it has two rows and two columns. The two rows represent the actual outcomes: the first row when the event happened, and the second row when the event didn't happen. The two columns represent the model's predictions: the first column when the model predicts the event will happen, and the second column when the model predicts that the event will not happen.




The intersections between the rows and columns are interpreted as follows (a short Python sketch follows the list):


  • The first intersection is between actual one and predicted one, referred to as True Positive. True means the prediction came true; Positive means the event happened. It can also be defined as a positive class that is predicted as positive.

  • The second intersection is between actual one and predicted zero, referred to as False Negative. False means the prediction is wrong; Negative means the model predicted the event would not happen, but it did happen. It can also be defined as a positive class that is predicted as negative, also known as a Type II Error.

  • The third intersection is between actual zero and predicted one, referred to as False Positive. False means the prediction is wrong; Positive means the model predicted the event would happen, but it did not. It can also be defined as a negative class that is predicted as positive, also known as a Type I Error.

  • The fourth intersection is between actual zero and predicted zero, referred to as True Negative. True means the prediction came true; Negative means the event did not happen. It can also be defined as a negative class that is predicted as negative.
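
As a minimal sketch, the four cells can be counted directly from a list of actual labels and a list of predicted labels. The example below assumes binary labels encoded as 1 (event happened) and 0 (event did not happen); the y_true and y_pred lists are made-up values for illustration only.

# Counting the four confusion-matrix cells for binary labels
# (1 = event happened / positive, 0 = event did not happen / negative).
# The labels below are illustrative, not real model output.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # actual classes
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # True Positive
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # False Negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # False Positive
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # True Negative

print(tp, fn, fp, tn)  # 3 1 1 3 for this example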


 

Accuracy


Accuracy is one metric used to evaluate the performance of the model. It is calculated as:


Accuracy = Number of Correct Predictions / Total Number of Predictions


It is also calculated as:

Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP, TN, FP, and FN are the counts of True Positives, True Negatives, False Positives, and False Negatives.
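
For example, using the illustrative counts from the sketch above (TP = 3, TN = 3, FP = 1, FN = 1):

tp, tn, fp, fn = 3, 3, 1, 1          # illustrative counts, not real results
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)                       # 0.75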


 

Precision


Precision is the ratio of correct positive predictions to the total positive predictions. It answers the following questions:


  • Out of all the records predicted as positive, how many are actually positive?

  • What proportion of positive identifications was actually correct?


Precision is calculated using the formula,


Precision = True Positive / (True Positive + False Positive)
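
Using the same illustrative counts (TP = 3, FP = 1):

tp, fp = 3, 1                         # illustrative counts
precision = tp / (tp + fp)
print(precision)                      # 0.75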

 

Recall


Recall measures how many of the actual positives the model is able to identify. Recall is also known as Sensitivity or the True Positive Rate (TPR). It answers the following questions:


  • Out of all positive records, how many records are predicted correctly?

  • What proportion of actual positives was identified correctly?

Recall is calculated using the formula,

Recall = True Positive / (True Positive + False Negative)
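
Using the same illustrative counts (TP = 3, FN = 1):

tp, fn = 3, 1                         # illustrative counts
recall = tp / (tp + fn)
print(recall)                         # 0.75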

 

F1 Score


The F1 score balances precision and recall: it is the harmonic mean of the two.


F1 Score is calculated as:

F1 = (2 * Precision * Recall) / (Precision + Recall)
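
With the illustrative precision and recall of 0.75 computed above:

precision, recall = 0.75, 0.75        # illustrative values from the sketches above
f1 = (2 * precision * recall) / (precision + recall)
print(f1)                             # 0.75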



 
