MindsDB Wiki: Machine Learning Explained

Explainability

The more sophisticated ML/AI models become, the less interpretable they tend to be. One of the critical shortcomings of machine learning is that it is often unclear how a system arrived at its result. MindsDB tries to tackle this flaw by adding explainability to every prediction: you can not only ask questions of your data, but also receive answers that explain themselves. This gives you the context you need to trust the model's conclusions.
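To make this concrete, below is a minimal Python sketch of what a self-explaining prediction can look like: the predicted value comes back together with a confidence score, a plausible range, and the input columns that influenced it most. The names and numbers here (ExplainedPrediction, the rental-price example) are illustrative assumptions for this sketch, not MindsDB's actual API.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class ExplainedPrediction:
        """A prediction bundled with the information needed to explain it.
        Hypothetical structure for illustration; not MindsDB's actual API."""
        value: float                 # the predicted value itself
        confidence: float            # the model's estimated chance of being right
        bounds: Tuple[float, float]  # plausible lower/upper range for the value
        important_columns: Dict[str, float] = field(default_factory=dict)  # column -> relative influence

        def explain(self) -> str:
            """Render the prediction as a human-readable explanation."""
            cols = ", ".join(
                f"{name} ({weight:.0%})"
                for name, weight in sorted(self.important_columns.items(),
                                           key=lambda kv: -kv[1])
            )
            return (f"Predicted {self.value:,.0f} with {self.confidence:.0%} confidence "
                    f"(likely between {self.bounds[0]:,.0f} and {self.bounds[1]:,.0f}); "
                    f"most influential inputs: {cols}.")

    if __name__ == "__main__":
        # Example: a rental-price prediction that explains itself.
        prediction = ExplainedPrediction(
            value=2150.0,
            confidence=0.87,
            bounds=(1950.0, 2350.0),
            important_columns={"sqft": 0.45, "location": 0.35, "number_of_rooms": 0.20},
        )
        print(prediction.explain())

The point is that the explanation travels with the answer: whoever consumes the prediction can see how confident the model is and which inputs drove it, rather than taking a bare number on faith.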

Explainable AI (XAI) is a term initially coined by DARPA (the Defense Advanced Research Projects Agency), which launched a research program of the same name to address exactly this problem.

What’s more, AI in its current form is designed to learn within specific domains and from concrete examples of data, narrowed to the particular problem it is trained to solve; it still takes the human capacity for abstract thinking to understand the full context of a problem.

Given the narrow scope of understanding that these AI/ML-based systems have, it is natural to argue that when such algorithms are used to make critical decisions about someone’s life or about society in general, we should not get rid of them, but neither should we hand them the full responsibility for those decisions (this seems obvious to me, and I hope it does to you too).

To elaborate on this further, I’d suggest reading Michael Jordan (not the basketball player, but a well-known professor of electrical engineering and computer science at Berkeley), who laid out his thoughts at length in his article “Artificial Intelligence — The Revolution Hasn’t Happened Yet”. He and I agree on something: we should ask ourselves which developments ML/AI needs today to safely augment the human capacity to solve very complicated problems. In my opinion, XAI is certainly one of them.

For More Details

Here you can find more in-depth information about Machine Learning and MindsDB.