
      A Review of Statistical-Based Fault Detection and Diagnosis with Probabilistic Models



          Abstract

As industrial processes grow increasingly complex, fault identification becomes challenging, and even minor errors can significantly impact both productivity and system safety. To manage this challenge, fault detection and diagnosis (FDD) has emerged as a crucial strategy for maintaining system reliability and safety through condition monitoring and abnormality recovery. Statistical-based FDD methods that rely on large-scale process data and their features have been developed for detecting faults. This paper overviews recent investigations and developments in statistical-based FDD methods, focusing on probabilistic models. The theoretical background of these models is presented, including Bayesian learning and maximum likelihood estimation. We then discuss various techniques and methodologies, e.g., probabilistic principal component analysis (PPCA), probabilistic partial least squares (PPLS), probabilistic independent component analysis (PICA), probabilistic canonical correlation analysis (PCCA), and probabilistic Fisher discriminant analysis (PFDA). Several test statistics are analyzed to evaluate the discussed methods. In industrial processes, these methods involve complex matrix operations and incur a substantial computational load. Finally, we discuss the current challenges and future trends in FDD.
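
To make the surveyed family of methods concrete, here is a minimal sketch (not from the paper) of PPCA-based fault detection: the model is fitted by its closed-form maximum-likelihood solution, and a Hotelling-style T^2 statistic on the posterior latent scores is compared against a chi-square control limit. The function names, the simulated fault, and the 99% limit are illustrative assumptions.

    import numpy as np
    from scipy.stats import chi2

    def fit_ppca(X, q):
        # Closed-form maximum-likelihood PPCA fit (Tipping & Bishop solution).
        mu = X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
        evals, evecs = evals[::-1], evecs[:, ::-1]      # descending eigenvalues
        sigma2 = evals[q:].mean()                       # ML noise variance = mean of discarded eigenvalues
        W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)  # ML loading matrix (rotation set to identity)
        return mu, W, sigma2

    def t2_statistic(X, mu, W, sigma2):
        # Hotelling-style T^2 on the posterior latent means E[z|x] = M^{-1} W^T (x - mu).
        q = W.shape[1]
        M_inv = np.linalg.inv(W.T @ W + sigma2 * np.eye(q))
        Z = (X - mu) @ W @ M_inv
        C_inv = np.linalg.inv(np.eye(q) - sigma2 * M_inv)  # Cov(E[z|x]) = I - sigma2 * M^{-1}
        return np.einsum("ij,jk,ik->i", Z, C_inv, Z)

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(500, 10))
    mu, W, sigma2 = fit_ppca(X_train, q=3)
    limit = chi2.ppf(0.99, df=3)  # T^2 is chi^2_q for in-control data under the fitted model
    print(t2_statistic(X_train[:5] + 3.0, mu, W, sigma2) > limit)  # biased samples -> flagged

Under the fitted model the T^2 statistic is exactly chi-square with q degrees of freedom for in-control data, which is what justifies the control limit; in practice a squared prediction error (SPE) statistic on the residual space is typically monitored alongside it.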


Most cited references (146)


          Bayes Factors


            Principal component analysis: a review and recent developments.

            Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.
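
The eigenvalue formulation mentioned in this abstract is compact enough to state directly; a minimal NumPy sketch (function and variable names are our own) is:

    import numpy as np

    def pca(X, k):
        # Center the data: components are directions of maximal variance about the mean.
        Xc = X - X.mean(axis=0)
        # The principal axes solve an eigenvalue/eigenvector problem on the sample covariance.
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(evals)[::-1][:k]      # top-k directions by variance
        scores = Xc @ evecs[:, order]            # new uncorrelated variables
        explained = evals[order] / evals.sum()   # fraction of variance retained
        return scores, evecs[:, order], explained

Sorting by eigenvalue implements the "successively maximize variance" property: each component captures the most remaining variance while staying uncorrelated with the earlier ones.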

              A global geometric framework for nonlinear dimensionality reduction.

Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs (30,000 auditory nerve fibers or 10^6 optic nerve fibers) a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
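
The method this abstract describes is Isomap. A compact sketch of its three stages (k-nearest-neighbor graph, geodesic distances by shortest paths, classical MDS on those distances) might look as follows, assuming the neighborhood graph comes out connected (otherwise some geodesic distances are infinite); the function name and defaults are illustrative:

    import numpy as np
    from scipy.sparse.csgraph import shortest_path
    from scipy.spatial.distance import pdist, squareform

    def isomap(X, n_neighbors=8, n_components=2):
        n = len(X)
        D = squareform(pdist(X))                 # pairwise Euclidean distances
        # Keep only each point's k nearest neighbors: the "easily measured
        # local metric information" of the abstract.
        G = np.full((n, n), np.inf)
        idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
        rows = np.repeat(np.arange(n), n_neighbors)
        G[rows, idx.ravel()] = D[rows, idx.ravel()]
        # Geodesic distances: shortest paths through the graph (inf marks non-edges).
        Dg = shortest_path(G, method="D", directed=False)
        # Classical MDS on the geodesic distances recovers the global geometry.
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (Dg ** 2) @ J             # double-centered Gram matrix
        evals, evecs = np.linalg.eigh(B)
        top = np.argsort(evals)[::-1][:n_components]
        return evecs[:, top] * np.sqrt(np.maximum(evals[top], 0.0))

For real use, library implementations such as sklearn.manifold.Isomap handle neighbor search and disconnected graphs more carefully than this sketch.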

                Author and article information

Journal: Symmetry (SYMMAM)
Publisher: MDPI AG
ISSN: 2073-8994
Published: April 8, 2024
Volume 16, Issue 4, Article 455
DOI: 10.3390/sym16040455
© 2024
License: https://creativecommons.org/licenses/by/4.0/
