So far we have created a confusion matrix from our model's prediction output; now we need to summarize it and draw some inference from it. There are several metrics for this:
1) Accuracy:
It is the ratio of correct predictions to the total number of predictions made by the model.
Accuracy is a useful evaluation metric when your classes are balanced. If your classes are imbalanced, Accuracy is not a good indicator of model performance; in that case you should look at Precision, Recall, and the F1 Score.
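As a quick sketch of how Accuracy = (TP + TN) / (TP + TN + FP + FN) can be computed from a confusion matrix (assuming scikit-learn; the label arrays y_true and y_pred below are hypothetical):

```python
# Minimal sketch, assuming scikit-learn and hypothetical label arrays y_true / y_pred.
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [1, 0, 0, 1, 0, 1, 0, 0]   # actual labels (hypothetical)
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]   # model predictions (hypothetical)

# For binary 0/1 labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)       # correct predictions / all predictions
print(accuracy, accuracy_score(y_true, y_pred))  # both give the same value
```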
2) Precision:
This metric tells you what percentage of the predicted positives are actually correct. It is the ratio of correct positive predictions to the total number of positive predictions made.
Example where Precision is important:
Suppose you have built an email spam detection engine that checks each email and, if it is spam, moves it to the Spam folder. In this case Precision is very important because we can afford False Negatives (a spam email slipping into the inbox), but we can't afford False Positives (a legitimate email being moved to the Spam folder).
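A minimal sketch of computing Precision = TP / (TP + FP) from the confusion matrix (scikit-learn assumed; the spam labels below are hypothetical, with 1 = spam):

```python
# Minimal sketch, assuming scikit-learn; the labels below are hypothetical (1 = spam).
from sklearn.metrics import confusion_matrix, precision_score

y_true = [1, 0, 0, 1, 0, 1, 0, 0]   # actual labels (hypothetical)
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]   # model predictions (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)                          # Precision = TP / (TP + FP)
print(precision, precision_score(y_true, y_pred))   # both give the same value
```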
3) Recall:
This metric tells you what percentage of the actual positive class the model has correctly identified. It is the ratio of correct positive predictions to the total number of actual positives.
Example where Recall is important:
Suppose a research organization has created a Covid test kit that identifies whether a person is Covid positive or not.
In this case, Recall is extremely important: it is necessary to capture all the Covid positives. A few False Positives in the predictions are acceptable, but the model should produce as few False Negatives as possible.
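Similarly, a minimal sketch of computing Recall = TP / (TP + FN) (scikit-learn assumed; the test results below are hypothetical, with 1 = Covid positive):

```python
# Minimal sketch, assuming scikit-learn; the labels below are hypothetical (1 = Covid positive).
from sklearn.metrics import confusion_matrix, recall_score

y_true = [1, 0, 0, 1, 0, 1, 0, 0]   # actual labels (hypothetical)
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]   # model predictions (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
recall = tp / (tp + fn)                       # Recall = TP / (TP + FN)
print(recall, recall_score(y_true, y_pred))   # both give the same value
```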
In short, Precision is concerned with all predicted positives (TP + FP), while Recall is concerned with all actual positives (TP + FN).