User Question: Calculating Feature Importance

Jorge Torres

MindsDB CTO Jorge Torres answers a user's question on how to calculate feature importance and represent a model's reasoning.

User question: How do you calculate feature importance (i.e., how do you infer the influence that each feature has on the model’s output) in a way that, undoubtedly, represents the model’s reasoning?

We’re still iterating on this. For the current implementation, the following steps are important.


  1. If you have a way to understand, for a specific prediction, how certain that prediction is, then you can try to reach the prediction without showing a specific feature, and repeat that for every feature individually. You then get a measurement of how much your accuracy metric or your degree of certainty changes as you obfuscate each of the features, which tells you how important that specific feature is for that prediction.
  2. You also want to understand how the predicted variable shifts when you show a feature or not, and how much that feature contributes to the value (for example, if you are predicting a numerical value, you can determine how much hiding that feature changes the prediction).
  3. You can then perturb the feature by a few sigmas in either direction to see how stable the prediction is with respect to that specific feature. (A rough code sketch of these three steps follows this list.)
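
As a concrete illustration, here is a minimal sketch of those three steps for a generic model. The `model` and `X` names, the choice of hiding a feature by replacing it with its column mean, and the two-sigma perturbation are all illustrative assumptions, not MindsDB APIs; if your model also exposes a certainty score, the same loop can be run on that score instead of the raw prediction.

```python
# Minimal sketch of obfuscation-based importance for a generic model.
# `model` is assumed to expose a scikit-learn-style predict(); `X` is a
# pandas DataFrame of features. Both are placeholders, not MindsDB APIs.
import numpy as np
import pandas as pd

def obfuscation_report(model, X: pd.DataFrame, n_sigmas: float = 2.0) -> dict:
    baseline = model.predict(X)
    report = {}
    for col in X.columns:
        # Steps 1 and 2: hide the feature (here: replace it with its mean)
        # and measure how much the prediction shifts.
        X_hidden = X.copy()
        X_hidden[col] = X[col].mean()
        shift = model.predict(X_hidden) - baseline

        # Step 3: perturb the feature a few sigmas each way and measure
        # how stable the prediction is with respect to it.
        sigma = X[col].std()
        X_up, X_down = X.copy(), X.copy()
        X_up[col] = X[col] + n_sigmas * sigma
        X_down[col] = X[col] - n_sigmas * sigma
        spread = model.predict(X_up) - model.predict(X_down)

        report[col] = {
            "mean_shift_when_hidden": float(np.mean(np.abs(shift))),
            "mean_spread_under_perturbation": float(np.mean(np.abs(spread))),
        }
    return report
```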


The combination of these three things will allow you to understand how important each feature is, as well as how much it contributes, and you can do this with any machine learning model. What we understand now from self-aware neural networks is:


  1. If your neural networks are probabilistic, in the sense that you have weight distributions rather than point weights, then as you train the model and the variances of those weights shrink, your model is more certain about having found a global minimum. If you repeat the same training with one of the features obfuscated and that sigma grows, your model is less likely to learn a global minimum without that variable, and that also gives you, globally, a measure of the certainty that specific feature adds to the model itself.
  2. Lastly, we have developed what we call self-aware neural networks, which predict both the target variable and their own error. You can then observe how much that predicted error changes when you show a specific feature or not, which gives you another heuristic for the importance of that variable, either globally or for this specific prediction. (Rough sketches of both ideas follow this list.)
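
The first of these two points can be made concrete with a tiny Bayesian-style linear layer whose weights are Gaussians, trained with the reparameterization trick (a Bayes-by-backprop-flavored sketch, not MindsDB's implementation). The prior, noise variance, and toy data below are assumptions for illustration; the thing to watch is how the learned weight sigmas behave once an informative feature is obfuscated.

```python
# Sketch of the "weight distributions" idea: each weight is a Gaussian
# (mu, sigma), trained with the reparameterization trick against a
# standard-normal prior. Illustrative only; not MindsDB's implementation.
import torch
import torch.nn.functional as F

class BayesianLinear(torch.nn.Module):
    def __init__(self, n_in: int):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(n_in))
        self.rho = torch.nn.Parameter(torch.full((n_in,), -3.0))  # sigma = softplus(rho)

    def sigma(self) -> torch.Tensor:
        return F.softplus(self.rho)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sample weights w ~ N(mu, sigma^2) via the reparameterization trick.
        w = self.mu + self.sigma() * torch.randn_like(self.mu)
        return x @ w

    def kl(self) -> torch.Tensor:
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over weights.
        s2 = self.sigma() ** 2
        return 0.5 * (s2 + self.mu ** 2 - 1.0 - torch.log(s2)).sum()

def fit(x: torch.Tensor, y: torch.Tensor, noise_var: float = 0.1,
        steps: int = 3000, lr: float = 0.02) -> torch.Tensor:
    layer = BayesianLinear(x.shape[1])
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nll = ((layer(x) - y) ** 2).sum() / (2.0 * noise_var)
        (nll + layer.kl()).backward()  # negative ELBO
        opt.step()
    return layer.sigma().detach()

# Toy data: the target depends on feature 0 only.
torch.manual_seed(0)
x = torch.randn(256, 2)
y = 2.0 * x[:, 0] + 0.1 * torch.randn(256)

print("weight sigmas, all features shown:", fit(x, y))

# Obfuscate the informative feature and retrain: its weight's sigma stays
# large (near the prior), signalling the certainty that feature added.
x_hidden = x.clone()
x_hidden[:, 0] = 0.0
print("weight sigmas, feature 0 hidden: ", fit(x_hidden, y))
```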
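
The second point, the error-predicting head, can be sketched as a small two-headed network: one head predicts the target, the other predicts the model's own error. The architecture, the loss (regressing the error head onto the absolute residual), and the mean-replacement obfuscation are illustrative assumptions rather than MindsDB's exact setup; in practice one might also train with features randomly obfuscated so the error head learns what a missing input costs.

```python
# Sketch of a "self-aware" network: one head predicts the target, a second
# head predicts the network's own error. Illustrative assumptions only.
import torch
import torch.nn.functional as F

class SelfAwareNet(torch.nn.Module):
    def __init__(self, n_in: int, hidden: int = 32):
        super().__init__()
        self.body = torch.nn.Sequential(torch.nn.Linear(n_in, hidden), torch.nn.ReLU())
        self.y_head = torch.nn.Linear(hidden, 1)    # predicts the target
        self.err_head = torch.nn.Linear(hidden, 1)  # predicts its own error

    def forward(self, x: torch.Tensor):
        h = self.body(x)
        return self.y_head(h).squeeze(-1), F.softplus(self.err_head(h)).squeeze(-1)

def train(model: SelfAwareNet, x: torch.Tensor, y: torch.Tensor,
          steps: int = 2000, lr: float = 1e-2) -> SelfAwareNet:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        y_hat, err_hat = model(x)
        resid = (y - y_hat).abs().detach()  # the error the model actually made
        loss = ((y - y_hat) ** 2).mean() + ((err_hat - resid) ** 2).mean()
        loss.backward()
        opt.step()
    return model

torch.manual_seed(0)
x = torch.randn(512, 3)
y = 2.0 * x[:, 0] + x[:, 1] + 0.1 * torch.randn(512)
model = train(SelfAwareNet(3), x, y)

# Heuristic: how much does the predicted error change when a feature is
# obfuscated (here, replaced by its mean)?
with torch.no_grad():
    _, err_full = model(x)
    for i in range(x.shape[1]):
        x_hidden = x.clone()
        x_hidden[:, i] = x[:, i].mean()
        _, err_hidden = model(x_hidden)
        print(f"feature {i}: predicted-error change = "
              f"{(err_hidden - err_full).mean().item():+.3f}")
```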


Those are the tools we rely on to build feature importance as well as contributions, or what we call force vector stores, at a specific value.

Author Bio

Jorge Torres is the Co-founder & CTO of MindsDB. He is also a visiting scholar at UC Berkeley researching machine learning automation and explainability. Prior to founding MindsDB, he worked for a number of data-intensive start-ups, most recently with Aneesh Chopra (the first CTO of the US government), building data systems that analyze billions of patient records and lead to significant savings for millions of patients. He started his work on scaling solutions using machine learning in early 2008 as the first full-time engineer at Couchsurfing, where he helped grow the company from a few thousand users to a few million. Jorge holds degrees in electrical engineering & computer science, including a master's degree in computer systems (with a focus on applied machine learning) from the Australian National University.
