Feb 08 2021 10:57 PM
Does Bonsai have local explainability? For example, in the Cartpole scenario, Bonsai continuously predicts an action, right or left. Local explainability shows how much each input feature contributed to a given prediction.
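To illustrate the idea, here is a minimal sketch of what a local explanation could look like for such a left/right policy. The `policy` function and its weights are purely hypothetical stand-ins for a trained brain, and the attribution method shown is simple finite-difference sensitivity, one of several local-explanation techniques (SHAP and LIME are common alternatives):

```python
import numpy as np

def policy(state):
    """Hypothetical trained policy: returns the probability of pushing right.
    A real Bonsai brain would be queried over its prediction endpoint instead."""
    weights = np.array([0.1, 0.4, 1.5, 0.8])  # toy weights, for illustration only
    return 1.0 / (1.0 + np.exp(-weights @ state))

def local_attributions(state, eps=1e-3):
    """Perturbation-based attribution: how much does nudging each input
    feature change the predicted action probability?"""
    base = policy(state)
    features = ["cart_position", "cart_velocity", "pole_angle", "pole_velocity"]
    attributions = {}
    for i, name in enumerate(features):
        perturbed = state.copy()
        perturbed[i] += eps
        # Finite-difference sensitivity of the prediction to this feature
        attributions[name] = (policy(perturbed) - base) / eps
    return attributions

state = np.array([0.02, -0.3, 0.05, 0.4])
print(local_attributions(state))  # per-feature contribution to this one prediction
```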
Feb 09 2021 08:22 AM
@keonabut we do not have a feature for local explainability today. Are you interested in it to help debug during training, or when using a trained brain? If you have use cases in mind, please add them as a suggestion at https://feedback.azure.com/forums/928846-project-bonsai to help us prioritize.
Feb 09 2021 04:42 PM
@VictorShnayder Okay. I found this sentence in the "Machine Teaching" article of the Azure Architecture Center. What does it mean for Bonsai? How about global explainability? Thanks.
Feb 11 2021 03:33 PM
@keonabut The explainability benefits of machine teaching come from decomposing a problem into concepts — skills or strategies that can be learned and applied separately. Given such a decomposition, the brain can output "here's what skill or strategy I'm applying now". This is a complement to local explainability, which would explain the actions chosen by a particular concept.
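To make the decomposition idea concrete, here is a minimal sketch in plain Python rather than Inkling. The `balance_concept`, `recenter_concept`, and `selector` functions are hypothetical illustrations, not Bonsai APIs; the point is that the top-level selector can report which concept is active alongside the chosen action:

```python
# Hypothetical concept policies; in Bonsai these would be separately
# trained concepts defined in Inkling.
def balance_concept(state):
    return "right" if state[2] > 0 else "left"   # react to pole angle

def recenter_concept(state):
    return "right" if state[0] < 0 else "left"   # react to cart position

def selector(state):
    """Top-level strategy: decide which concept applies right now."""
    if abs(state[0]) > 1.0:                      # cart drifting off-center
        return "recenter", recenter_concept
    return "balance", balance_concept

state = [1.4, 0.0, 0.03, 0.0]  # cart_position, cart_velocity, pole_angle, pole_velocity
concept_name, concept = selector(state)
action = concept(state)
# Reporting the active concept with the action is the explainability
# benefit that the decomposition provides.
print(f"concept={concept_name}, action={action}")
```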