From a Confusion Matrix to Another, Not-So-Confusing Matrix.

Ricardo Antonio Rambal Fattori

How can we go from a 'Confusion Matrix' to a Not-So-Confusing Matrix?

If you are new to predictive Machine Learning, chances are you’ve heard the term ‘confusion matrix’ or ‘error matrix.’ On the off chance you haven’t, Wikipedia defines a confusion matrix as:

“ … a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each row of the matrix represents the instances in a predicted class while each column represents the instances in an actual class (or vice versa). The name stems from the fact that it makes it easy to see if the system is confusing two classes (i.e. commonly mislabeling one as another).”
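To make that concrete, here is a minimal sketch (with made-up labels) of how such a table can be computed with scikit-learn:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels, purely for illustration.
actual    = ["sick", "healthy", "sick", "sick", "healthy"]
predicted = ["sick", "healthy", "healthy", "sick", "sick"]

# scikit-learn uses one of the two conventions mentioned above:
# rows are actual classes, columns are predicted classes,
# in the order given by `labels`.
print(confusion_matrix(actual, predicted, labels=["healthy", "sick"]))
# [[1 1]
#  [1 2]]
```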

Another way to think about a confusion matrix is with an example. Let’s assume you’re feeling a little under the weather, so you go to the doctor’s office, where they order a blood test to determine whether or not you have a virus.

Before you get the results, they tell you there is a small chance of a false positive. But what does that mean? Well, let’s look at the test’s confusion matrix to understand!

Can you guess where this matrix shows the false positive? (Top right corner)

In other words: there is a small chance of the test telling you that you have the virus when you don’t have it. Now imagine the opposite case: the test results tell the doctor you don’t have the virus when you actually DO have it (a false negative). Scary, dangerous stuff…
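If you want to see those four outcomes in code, here is a small sketch with hypothetical test results; for a binary problem, scikit-learn lets you unpack all four cells at once:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical test results: 0 = no virus, 1 = virus.
actual    = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
predicted = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])

# For a binary problem, ravel() unpacks the four cells in order.
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"false positives: {fp}")  # test says virus, you don't have it
print(f"false negatives: {fn}")  # test says no virus, you DO have it
```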

If you’ve followed along so far, we are halfway there. But we still have a couple more problems regarding confusion matrices in AutoML.

Now you ask, “Hey Richie, what if I am predicting more than two things?” (In the former case there were only two: 1. having the virus, 2. not having the virus.) Well my friend, it gets a little trickier, because you and the model can get confused in more diverse ways.

Let’s switch examples: suppose you are trying to predict whether an image is of a cat, a dog, or a raccoon. After training, you get the following confusion matrix:

In this case, the model is pretty good at telling the difference between a dog and a cat, but not so good (it gets confused) at differentiating between a cat and a raccoon. Notice how the confusion matrix explains a lot of your model’s behavior, telling you its strengths and weaknesses. Now imagine this matrix for predicting the whole animal kingdom: it would be all over the place! That makes it difficult to extract insights on how to improve the model and the dataset.
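As a hedged sketch, hypothetical predictions like these produce a matrix with that shape: dogs are rarely confused, while cats and raccoons get mixed up:

```python
from sklearn.metrics import confusion_matrix

labels = ["cat", "dog", "raccoon"]

# Hypothetical predictions mirroring the behavior described above.
actual    = ["cat"] * 10 + ["dog"] * 10 + ["raccoon"] * 10
predicted = (["cat"] * 8 + ["raccoon"] * 2     # cats mostly right
             + ["dog"] * 10                    # dogs always right
             + ["raccoon"] * 7 + ["cat"] * 3)  # raccoons often "cat"

# The diagonal holds the correct predictions; off-diagonal cells
# show where the model confuses one class for another.
print(confusion_matrix(actual, predicted, labels=labels))
# [[ 8  0  2]
#  [ 0 10  0]
#  [ 3  0  7]]
```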

When redesigning the confusion matrix for MindsDB’s new Graphical User Interface, we had to take all of this into consideration and look for a more explainable way to render the results.

MindsDB Confusion Matrix - New Features.

Color-coding

By color-coding the confusion matrix like a heatmap, we can make it immediately clear whether or not to trust a model. Essentially, the green cells tell you the percentage of times the model was correct at predicting exactly that label and, in contrast, the red cells tell you how often the model erred and confused it with another label.
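Scout’s rendering is its own design, but the idea can be approximated in a few lines of matplotlib (a sketch, not the actual Scout code): row-normalize the matrix into percentages, then flip the sign of the error cells so a red-to-green diverging colormap paints the diagonal green and the mistakes red:

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["cat", "dog", "raccoon"]
cm = np.array([[8, 0, 2], [0, 10, 0], [3, 0, 7]])  # hypothetical counts

# Normalize each row so every cell reads as "% of that actual class".
pct = cm / cm.sum(axis=1, keepdims=True) * 100

# Keep correct cells positive and make errors negative, so a
# red-to-green diverging colormap shows the diagonal in green
# and the mistakes in red.
signed = np.where(np.eye(len(labels), dtype=bool), pct, -pct)

fig, ax = plt.subplots()
ax.imshow(signed, cmap="RdYlGn", vmin=-100, vmax=100)
ax.set_xticks(range(len(labels)), labels=labels)
ax.set_yticks(range(len(labels)), labels=labels)
ax.set_xlabel("predicted")
ax.set_ylabel("actual")
for i in range(len(labels)):
    for j in range(len(labels)):
        ax.text(j, i, f"{pct[i, j]:.0f}%", ha="center", va="center")
plt.show()
```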

But that wasn’t enough, believe it or not. After explaining this matrix many times, we’ve learned that the axis positions for actual/predicted can be confusing. That’s why we made this matrix interactive.

Explaining the Graphic.

Our team spent too many meetings discussing model results with sentences such as: “the model is accurate at predicting a cat but it misjudges it 36% of the time with a raccoon and with a dog, or wait, actually it is 36% raccoon and actually let me check again and…”

Consequently, we decided to make MindsDB Scout (the GUI) tell you exactly what every value means. If you hover over any cell of the matrix, it will spell out in plain language what the model is doing.

E.g., if 36% of the time the model was supposed to predict “raccoon” it failed and confused it with “cat,” the results would look like this:

On the left, an interactive, wordy explanation of the confusion matrix. On the right, the confusion matrix itself.
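The hover text itself boils down to simple templating. Here is a rough sketch of the idea (not the actual Scout code, and with illustrative numbers rather than the 36% from the screenshot): take a cell, normalize it by its row, and fill in a sentence:

```python
def explain_cell(cm, labels, actual_idx, predicted_idx):
    """Turn one confusion-matrix cell into a plain-language sentence."""
    pct = cm[actual_idx][predicted_idx] / sum(cm[actual_idx]) * 100
    actual, predicted = labels[actual_idx], labels[predicted_idx]
    if actual == predicted:
        return (f"When the model was supposed to predict '{actual}', "
                f"it was correct {pct:.0f}% of the time.")
    return (f"{pct:.0f}% of the time the model was supposed to predict "
            f"'{actual}', it failed and confused it with '{predicted}'.")

cm = [[8, 0, 2], [0, 10, 0], [3, 0, 7]]  # hypothetical counts
labels = ["cat", "dog", "raccoon"]
print(explain_cell(cm, labels, 2, 0))
# 30% of the time the model was supposed to predict 'raccoon',
# it failed and confused it with 'cat'.
```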


We encourage you to go and try the new MindsDB version and tell us what you think. If you have ideas on how to make this more explainable, let us know.

BONUS: There’s another problem to think about regarding confusion matrices: numerical predictions!
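A confusion matrix assumes discrete classes, so raw numbers don’t fit directly. One common workaround, sketched below with made-up values, is to bucket the numeric range into bins and treat each bin as a class:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical continuous targets, e.g. house prices in $1000s.
actual    = np.array([120, 340, 95, 510, 230, 410])
predicted = np.array([140, 290, 90, 470, 310, 380])

# Bucket the numbers into ranges so they behave like classes.
edges = [200, 400]  # boundaries between "low", "mid", "high"
actual_bins    = np.digitize(actual, edges)
predicted_bins = np.digitize(predicted, edges)

print(confusion_matrix(actual_bins, predicted_bins))
# [[2 0 0]
#  [0 2 0]
#  [0 1 1]]
```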


Author Bio

Project manager and design thinker with an iterative and creative logic. A believer in design and leadership as creative disciplines, well structured and grounded in methodology, as opposed to the notion of design as a mere compendium of techniques and leadership as an innate talent. Currently making Machine Learning explainable.

Be Part of Our Community.

Join our growing community.