Explainability of Bonsai Prediction


Does Bonsai have local explainability? For example, in the Cartpole scenario, Bonsai continuously predicts an action, right or left. Local explainability would show how much each input feature contributes to each individual prediction.
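To illustrate what I mean (outside of Bonsai), here is a minimal sketch of per-prediction feature attribution using gradient-times-input on a toy policy network. The network and the attribution method are my own illustration, assuming PyTorch; none of this is a Bonsai API:

```python
# Illustrative sketch only -- local explainability is not a Bonsai feature.
# Assumes a small, hypothetical PyTorch policy for CartPole.
import torch
import torch.nn as nn

# Toy policy: 4 observations -> 2 action logits (left, right).
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

obs = torch.tensor([0.02, -0.31, 0.05, 0.40], requires_grad=True)
logits = policy(obs)
action = logits.argmax()

# Gradient x input: a simple local attribution of each observation
# feature to the logit of the chosen action.
logits[action].backward()
attribution = (obs.grad * obs).detach()

for name, score in zip(
    ["cart_position", "cart_velocity", "pole_angle", "pole_velocity"],
    attribution,
):
    print(f"{name}: {score.item():+.4f}")
```

In practice one would reach for SHAP or integrated gradients instead of raw gradient-times-input, but the idea is the same: attribute each chosen action to the observations that drove it.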


@keonabut We do not have a feature for local explainability today. Are you interested in it to help debug during training, or when using a trained brain? If you have use cases in mind, please add them as a suggestion at https://feedback.azure.com/forums/928846-project-bonsai to help us prioritize.


@VictorShnayder Okay. I found this sentence in the "Machine Teaching" article of the Azure Architecture Center. What does it mean for Bonsai? How about global explainability? Thanks.

[Screenshot: excerpt from the Machine Teaching article in the Azure Architecture Center]

@keonabut The explainability benefits of machine teaching come from decomposing a problem into concepts — skills or strategies that can be learned and applied separately. Given such a decomposition, the brain can output "here's what skill or strategy I'm applying now". This is a complement to local explainability, which would explain the actions chosen by a particular concept.
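As a hedged illustration of that idea (the skill names, thresholds, and plain-Python structure are made up for this post, not Bonsai's Inkling syntax), a concept-decomposed brain can report which strategy it is applying at each step:

```python
# Hypothetical sketch of machine-teaching style decomposition: a selector
# chooses among separately learned skills, and logging that choice gives a
# concept-level explanation ("which strategy am I applying now").
def recover_skill(state):
    # Aggressive correction when the pole is far from vertical.
    return "right" if state["pole_angle"] > 0 else "left"

def balance_skill(state):
    # Gentle correction near vertical, reacting to angular velocity.
    return "right" if state["pole_velocity"] > 0 else "left"

def selector(state):
    # The decomposition itself is the explanation: report the active
    # concept before delegating the action choice to it.
    skill = recover_skill if abs(state["pole_angle"]) > 0.1 else balance_skill
    action = skill(state)
    print(f"concept={skill.__name__} action={action}")
    return action

selector({"pole_angle": 0.03, "pole_velocity": -0.2})
# prints: concept=balance_skill action=left
```

Local explainability would then operate one level down, explaining why a given concept (here, balance_skill) chose the action it did from the input features.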