
      Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

Preprint
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin


          Abstract

          At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Implicit in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate those predictions are, and effort either to the up-front cost of interpreting the model or to the cost of making predictions about its behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations whose coverage boundaries are very clear. We compare aLIME to linear LIME in simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.
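          The paper itself includes no code here, but the coverage/precision framing above suggests a simple Monte Carlo check. The sketch below is illustrative only: the function name estimate_anchor, the background-resampling perturbation scheme, and the assumption of categorical features are all ours, not the authors' implementation. It estimates the precision of a candidate anchor rule (hold the anchor features fixed, resample the rest from background data, and measure how often the model's prediction is unchanged) and its coverage (the fraction of background instances the rule applies to).

              import numpy as np

              def estimate_anchor(model_predict, instance, anchor_idx,
                                  X_background, n_samples=1000, seed=0):
                  """Monte Carlo estimate of an anchor rule's precision and coverage.

                  model_predict: function mapping a 2-D array of inputs to class labels
                  instance:      1-D array, the instance being explained
                  anchor_idx:    list of feature indices fixed by the candidate anchor
                  X_background:  2-D array of background data (categorical features assumed)
                  """
                  rng = np.random.default_rng(seed)
                  base_pred = model_predict(instance[None, :])[0]

                  # Perturb: resample whole rows from the background data,
                  # then clamp the anchor features back to the instance's values.
                  rows = rng.integers(0, len(X_background), size=n_samples)
                  perturbed = X_background[rows].copy()
                  perturbed[:, anchor_idx] = instance[anchor_idx]

                  # Precision: how often the prediction is invariant under perturbation.
                  precision = np.mean(model_predict(perturbed) == base_pred)

                  # Coverage: fraction of background instances the rule applies to.
                  coverage = np.mean(
                      np.all(X_background[:, anchor_idx] == instance[anchor_idx], axis=1)
                  )
                  return precision, coverage

          A high-precision anchor with wide coverage is the ideal output; in practice one searches over candidate feature subsets and keeps those whose estimated precision clears a threshold.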


                Author and article information

                Date: 2016-11-17
                Article: arXiv:1611.05817
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Comments: Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems
                Subject classes: stat.ML, cs.AI, cs.LG
                Keywords: Machine learning, Artificial intelligence
