
      Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research

      review-article


          Abstract

          Psychiatric research is often confronted with complex abstractions and dynamics that are not readily accessible to, or well defined by, our perception and measurements, making data-driven methods an appealing approach. Deep neural networks (DNNs) are capable of automatically learning abstractions in the data that can be entirely novel and have demonstrated superior performance over classical machine learning models across a range of tasks and, therefore, serve as a promising tool for making new discoveries in psychiatry. A key concern for the wider application of DNNs is their reputation as a “black box” approach—i.e., they are said to lack transparency or interpretability regarding how input data are transformed into model outputs. In fact, several existing and emerging tools are providing improvements in interpretability. However, most reviews of interpretability for DNNs focus on theoretical and/or engineering perspectives. This article reviews approaches to DNN interpretability issues that may be relevant to their application in psychiatric research and practice. It describes a framework for understanding these methods, reviews the conceptual basis of specific methods and their potential limitations, and discusses prospects for their implementation and future directions.
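          A simple illustration of the gradient-based interpretability methods such reviews cover is input saliency: the gradient of a network's output with respect to each input feature scores that feature's local influence on the prediction. The sketch below is hypothetical NumPy code (not from the article), using a toy two-layer network and checking the hand-derived gradient against finite differences.

```python
import numpy as np

# Minimal saliency sketch (illustrative, not the article's code):
# y = w2 . tanh(W1 x); the input gradient dy/dx gives one
# influence score per input feature.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # hidden-from-input weights
w2 = rng.normal(size=3)        # output weights

def forward(x):
    h = np.tanh(W1 @ x)        # hidden activations
    return w2 @ h              # scalar output

def saliency(x):
    # Backpropagate dy/dx by hand through tanh and both linear maps.
    h = np.tanh(W1 @ x)
    dz = w2 * (1.0 - h ** 2)   # dy/d(pre-activation), since tanh' = 1 - tanh^2
    return W1.T @ dz           # dy/dx, shape (4,)

x = rng.normal(size=4)
grad = saliency(x)

# Sanity check against central finite differences.
eps = 1e-6
fd = np.array([
    (forward(x + eps * np.eye(4)[i]) - forward(x - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])
```

Features with larger |dy/dx| entries are, locally, the ones the model's output is most sensitive to; saliency maps in imaging applications visualize exactly this quantity per pixel or voxel.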

          Related collections

          Most cited references: 89


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
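            The backpropagation procedure this abstract describes—computing, layer by layer, how a loss changes with each internal parameter, then adjusting those parameters—can be sketched in a few lines. The code below is an illustrative toy (NumPy, hypothetical data, not from the cited paper): one gradient step on a two-layer network's squared-error loss.

```python
import numpy as np

# Minimal backpropagation sketch (illustrative): one parameter update
# for a two-layer network, with gradients computed output-to-input.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))            # toy input batch
y = rng.normal(size=8)                 # toy targets
W1 = rng.normal(size=(4, 5)) * 0.5     # first-layer weights
w2 = rng.normal(size=5) * 0.5          # second-layer weights

def loss(W1, w2):
    h = np.tanh(X @ W1)                # layer-1 representation
    return np.mean((h @ w2 - y) ** 2)  # squared-error loss

# Backward pass: propagate the error signal through each layer.
h = np.tanh(X @ W1)
err = 2.0 * (h @ w2 - y) / len(y)                  # dL/d(output)
g_w2 = h.T @ err                                    # dL/dw2
g_W1 = X.T @ (np.outer(err, w2) * (1.0 - h ** 2))  # dL/dW1 via tanh'

lr = 0.01                               # small learning rate
before = loss(W1, w2)
after = loss(W1 - lr * g_W1, w2 - lr * g_w2)
```

Descending along these gradients lowers the loss, which is the mechanism by which each layer's representation is tuned from the layer below it.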

            Random Forests


              Long Short-Term Memory

              Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
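              The gating mechanism the abstract describes can be made concrete with a single LSTM step. The sketch below is illustrative NumPy code (not the cited paper's implementation): multiplicative input, forget, and output gates control what enters, persists in, and leaves the cell state—the "constant error carousel" that lets gradients survive long time lags.

```python
import numpy as np

# Minimal single-step LSTM sketch (illustrative).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    # W maps the concatenated [input; hidden] vector to the stacked
    # pre-activations of the four gate/candidate blocks.
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
    c_new = f * c + i * np.tanh(g)                # gated cell-state update
    h_new = o * np.tanh(c_new)                    # gated output
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 2
W = rng.normal(size=(4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(5):                                # roll over a short sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

Note that when the forget gate f is near 1 and the input gate i near 0, c is carried forward almost unchanged—this additive, gated path is what preserves error flow over long intervals.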

                Author and article information

                Contributors
                Journal
                Frontiers in Psychiatry (Front. Psychiatry)
                Publisher: Frontiers Media S.A.
                ISSN: 1664-0640
                Published: 29 October 2020
                Volume: 11
                Article: 551299
                Affiliations
                1. Psychiatric Neurodevelopmental and Genetics Unit, Department of Psychiatry, Massachusetts General Hospital, Boston, MA, United States
                2. Department of Psychiatry, Harvard Medical School, Boston, MA, United States
                3. The Stanley Center, Broad Institute of Harvard and Massachusetts Institute of Technology (MIT), Cambridge, MA, United States
                Author notes

                Edited by: Albert Yang, National Yang-Ming University, Taiwan

                Reviewed by: Fengqin Wang, Hubei Normal University, China; Shih-Jen Tsai, Taipei Veterans General Hospital, Taiwan

                *Correspondence: Yi-han Sheu ysheu@mgh.harvard.edu

                This article was submitted to Computational Psychiatry, a section of the journal Frontiers in Psychiatry

                Article
                DOI: 10.3389/fpsyt.2020.551299
                PMCID: PMC7658441
                PMID: 33192663
                Copyright © 2020 Sheu.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 14 April 2020
                Accepted: 22 September 2020
                Page count
                Figures: 2, Tables: 1, Equations: 0, References: 93, Pages: 14, Words: 12050
                Categories
                Psychiatry
                Review

                Clinical Psychology & Psychiatry
                model interpretability, explainable AI, deep learning, deep neural networks, machine learning, psychiatry
