
      A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation

      research-article


          Abstract

          This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which considers not only the low-level features of infrared and visible images but also high-level semantic information. Source images are divided into foregrounds and backgrounds by semantic masks. A generator with a dual-encoder-single-decoder framework extracts foreground and background features through separate encoder paths. Moreover, the discriminator's input image is constructed using the semantic segmentation: the foregrounds of the infrared images are combined with the backgrounds of the visible images. Consequently, the prominence of thermal targets in the infrared images and the texture details of the visible images are preserved simultaneously in the fused images. Qualitative and quantitative experiments on publicly available datasets demonstrate that the proposed approach significantly outperforms state-of-the-art methods.
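
          To make the two ideas in the abstract concrete, the following sketch (PyTorch; not the authors' released code) shows one plausible reading: splicing infrared foregrounds with visible backgrounds via a semantic mask to form the discriminator's reference input, and routing foreground and background features through separate encoder paths into a single decoder. All layer widths, activations, and names are illustrative assumptions, not taken from the paper.

          # Hypothetical sketch based only on the abstract; architecture details
          # (channel counts, kernel sizes, activations) are guesses for illustration.
          import torch
          import torch.nn as nn

          def compose_reference(ir, vis, mask):
              """Splice infrared foregrounds with visible backgrounds.

              ir, vis: (B, 1, H, W) source images; mask: (B, 1, H, W) in [0, 1],
              1 on foreground pixels (thermal targets), 0 on background.
              """
              return mask * ir + (1.0 - mask) * vis

          class DualEncoderGenerator(nn.Module):
              """Two encoder paths (foreground / background) feeding one decoder."""
              def __init__(self, ch=32):
                  super().__init__()
                  def enc():
                      return nn.Sequential(
                          nn.Conv2d(2, ch, 3, padding=1), nn.LeakyReLU(0.2),
                          nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
                      )
                  self.fg_encoder = enc()   # sees masked (foreground) inputs
                  self.bg_encoder = enc()   # sees complementary (background) inputs
                  self.decoder = nn.Sequential(
                      nn.Conv2d(2 * ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
                      nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
                  )

              def forward(self, ir, vis, mask):
                  x = torch.cat([ir, vis], dim=1)           # stack the two modalities
                  f_fg = self.fg_encoder(x * mask)          # foreground feature path
                  f_bg = self.bg_encoder(x * (1.0 - mask))  # background feature path
                  return self.decoder(torch.cat([f_fg, f_bg], dim=1))

          # Example: fuse a batch of 1-channel 256x256 images with a binary mask.
          ir = torch.rand(4, 1, 256, 256)
          vis = torch.rand(4, 1, 256, 256)
          mask = (torch.rand(4, 1, 256, 256) > 0.5).float()
          ref = compose_reference(ir, vis, mask)     # discriminator's reference image
          fused = DualEncoderGenerator()(ir, vis, mask)

          The composition step is what lets the adversarial game push thermal saliency and visible texture into the same output: the discriminator judges the fused image against a reference that is, by construction, infrared in the foreground and visible in the background.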


                Author and article information

                Contributors
                Role: Academic Editor
                Journal
                Entropy (Basel); MDPI
                ISSN: 1099-4300
                Published: 21 March 2021 (March 2021 issue)
                Volume 23, Issue 3, Article 376
                Affiliations
                [1] College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China; houjilei455@gmail.com (J.H.); whgcdxwuwei@163.com (W.W.)
                [2] Research Institute of Nuclear Power Operation, Wuhan 430000, China
                [3] Electronic Information School, Wuhan University, Wuhan 430072, China; jyma2010@gmail.com
                Author notes
                [*] Correspondence: zhangdz02@cnnp.com.cn (D.Z.); zhouhuabing@gmail.com (H.Z.); Tel.: +86-13986201405 (H.Z.)
                Author information
                https://orcid.org/0000-0002-4597-0743
                https://orcid.org/0000-0003-3264-3265
                https://orcid.org/0000-0001-5007-7303
                Article
                entropy-23-00376
                DOI: 10.3390/e23030376
                PMCID: 8004063
                b09220f9-e0eb-4ffd-8bbb-455f3a5da236
                © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 19 February 2021; Accepted: 17 March 2021
                Categories
                Article

                Keywords: image fusion, semantic segmentation, generative adversarial network, infrared image, visible image
