Local interpretable model-agnostic explanations (LIME) is a technique to explain the predictions of any machine learning classifier. Here's an introduction to LIME and how it works.
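LIME's core recipe can be sketched in a few steps: perturb the instance being explained, query the black-box model on the perturbations, weight each perturbation by its proximity to the original instance, and fit a weighted linear model whose coefficients act as the local explanation. A minimal, self-contained sketch of that idea (illustrative only, not the actual `lime` library; the black-box model and kernel width here are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical opaque classifier: a nonlinear function of two features.
    return 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1] + X[:, 0] * X[:, 1])))

def lime_explain(x, predict, n_samples=1000, kernel_width=0.75):
    d = x.shape[0]
    # 1. Sample perturbations around the instance to explain.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    # 2. Query the black-box model on the perturbations.
    y = predict(Z)
    # 3. Weight each sample by proximity to x (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear model via the normal equations (with intercept).
    A = np.column_stack([np.ones(n_samples), Z])
    AtW = A.T * w  # row-scale A^T by the sample weights
    coef = np.linalg.solve(AtW @ A, AtW @ y)
    return coef[1:]  # per-feature local importance

x = np.array([0.5, -0.5])
importance = lime_explain(x, black_box)
print(importance)
```

Near `x = [0.5, -0.5]` the black box is locally increasing in the first feature and decreasing in the second, and the fitted coefficients recover those signs. The real library adds interpretable feature representations and sparse (LASSO) fitting on top of this basic loop.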
I found this really interesting. I guess this is similar to SMART for objectives.