User question: How do you calculate feature importance (i.e., how do you infer the influence each feature has on the model's output) in a way that faithfully represents the model's reasoning?
We’re still iterating on this. For the current implementation, the following steps are important.
Combining these three things lets you understand both how important each feature is and how much it contributes, and you can do this with any machine learning model. What we understand now from self-aware neural networks is:
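Since the answer stresses that this works with any machine learning model, one common model-agnostic way to estimate feature importance is permutation importance: shuffle one feature's column and measure how much the model's error degrades. This is a minimal sketch of that general idea, not MindsDB's specific implementation; the toy linear "model" and the data are invented for illustration.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: shuffle one feature at a time and
    measure how much the model's mean squared error grows."""
    rng = np.random.default_rng(seed)
    base_error = np.mean((model(X) - y) ** 2)      # baseline MSE
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])              # break feature j's link to y
            errors.append(np.mean((model(X_perm) - y) ** 2))
        # Importance = average error increase caused by destroying feature j.
        importances[j] = np.mean(errors) - base_error
    return importances

# Toy setup: y depends strongly on feature 0, weakly on feature 1, not on 2.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]   # stand-in trained model

imp = permutation_importance(model, X, y)
```

Because the approach only queries the model's predictions, it applies equally to neural networks, gradient-boosted trees, or any other black box.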
Those are the tools we draw on to build feature importance as well as feature contributions, or what we call force vector stores, at a specific value.
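The idea of a contribution "at a specific value" can be illustrated with a simple ablation: for one concrete input, replace each feature in turn with a baseline value and record how the prediction shifts. This is only a generic sketch of per-feature local contributions, not the force-vector-store mechanism the answer names; the model, input, and baseline here are assumptions for illustration.

```python
import numpy as np

def feature_contributions(model, x, baseline):
    """Local contribution of each feature at a specific input x:
    ablate one feature at a time to its baseline value and record
    how far the prediction moves."""
    full_pred = model(x[None, :])[0]
    contribs = np.zeros_like(x)
    for j in range(x.size):
        x_abl = x.copy()
        x_abl[j] = baseline[j]                 # ablate feature j only
        contribs[j] = full_pred - model(x_abl[None, :])[0]
    return contribs

model = lambda X: 2.0 * X[:, 0] - 1.0 * X[:, 1]   # toy linear model
x = np.array([1.0, 1.0, 1.0])                     # the specific input value
baseline = np.zeros(3)                            # e.g., the training mean

c = feature_contributions(model, x, baseline)
```

For a linear model the contributions recover the weighted inputs exactly; for a nonlinear model they give a local, input-specific picture rather than a single global ranking.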
Jorge Torres is the Co-founder & CTO of MindsDB. He is also a visiting scholar at UC Berkeley researching machine learning automation and explainability. Prior to founding MindsDB, he worked for a number of data-intensive start-ups, most recently with Aneesh Chopra (the first CTO in the US government), building data systems that analyze billions of patient records and led to significant savings for millions of patients. He started his work on scaling solutions using machine learning in early 2008 as the first full-time engineer at Couchsurfing, where he helped grow the company from a few thousand users to a few million. Jorge holds degrees in electrical engineering & computer science, including a master's degree in computer systems (with a focus on applied machine learning) from the Australian National University.