
      Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning

Research article


          Abstract

          Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework that incorporates CNNs into a bounding-box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function that considers network- and interaction-based uncertainty during fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust in segmenting previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method achieves accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
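
          To make the idea concrete, below is a minimal PyTorch sketch of image-specific fine tuning, not the authors' implementation: it assumes a pretrained binary segmentation network `net`, and it stands in for the paper's network- and interaction-based uncertainty weighting with a simple confidence threshold; scribble pixels, where provided, override the network's own pseudo-labels.

```python
import torch
import torch.nn.functional as F

def image_specific_fine_tune(net, image, scribbles, steps=20, lr=1e-4):
    """Adapt a pretrained segmentation network to one test image (sketch).

    image:     1 x C x H x W test image tensor
    scribbles: 1 x H x W long tensor; -1 = unlabeled, 0/1 = user scribbles
    """
    net.train()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        logits = net(image)                       # 1 x 2 x H x W
        probs = F.softmax(logits, dim=1)
        pseudo = probs.argmax(dim=1)              # pseudo-labels (unsupervised case)
        # Confidence threshold: a simplified stand-in for the paper's
        # network-uncertainty weighting (trust only confident pixels).
        weights = (probs.max(dim=1).values > 0.9).float()
        # User scribbles, where given, override the pseudo-labels with
        # full weight (the supervised case).
        target = torch.where(scribbles >= 0, scribbles, pseudo)
        weights = torch.where(scribbles >= 0, torch.ones_like(weights), weights)
        loss = (F.cross_entropy(logits, target, reduction="none")
                * weights).sum() / weights.sum().clamp(min=1.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    net.eval()
    with torch.no_grad():
        return net(image).argmax(dim=1)           # refined segmentation
```

          The point of the weighting is that unconfident, unscribbled pixels contribute nothing to the adaptation, so the network is only pulled toward labels it either predicted confidently or received from the user.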


          Most cited references


          DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

          In this work, we address the task of semantic image segmentation with deep learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within deep convolutional neural networks (DCNNs). It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected conditional random field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
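
            As an illustration of the atrous ideas, here is a minimal PyTorch sketch of an ASPP-style module; it is not DeepLab's exact head. The dilation rates (6, 12, 18, 24) match the paper's ASPP-L setting, and padding equal to the dilation rate keeps the spatial size fixed, but fusing branches by concatenation plus a 1x1 convolution is a simplification (the original sums per-branch classifier scores).

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling (sketch). Parallel 3x3 convolutions
    with different dilation rates see the same feature map at several
    effective fields of view; their outputs are fused into one
    multi-scale representation."""

    def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding=r with dilation=r preserves H x W for a 3x3 kernel
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Example: a 256-channel feature map at 1/8 of the input resolution.
feats = torch.randn(1, 256, 64, 64)
print(ASPP()(feats).shape)  # torch.Size([1, 256, 64, 64])
```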

            The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

            In this paper, we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked among the top performers for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
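
              The fusion step credited above with beating every individual algorithm can be illustrated with a pixel-wise majority vote. The sketch below is a simplified, non-hierarchical NumPy version; the benchmark's actual scheme applies the vote hierarchically over nested tumor sub-regions.

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse binary segmentations by pixel-wise majority vote (sketch).

    segmentations: list of equally shaped {0, 1} arrays, one per algorithm.
    A pixel is labeled foreground when more than half of the algorithms
    agree on it.
    """
    votes = np.sum(segmentations, axis=0)
    return (votes > len(segmentations) / 2).astype(np.uint8)

# Three toy segmenters disagreeing on one pixel.
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 0], [0, 1]])
c = np.array([[0, 0], [1, 1]])
print(majority_vote([a, b, c]))  # [[1 0]
                                 #  [1 1]]
```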

              U-Net: Convolutional Networks for Biomedical Image Segmentation

              It is widely agreed that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies heavily on data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks. Using the same network trained on transmitted-light microscopy images (phase contrast and DIC), we won the ISBI Cell Tracking Challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
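
                The contracting/expanding structure with skip connections fits in a few lines. The following is a deliberately tiny two-level PyTorch sketch of the architecture's shape, not the 23-layer original; it uses padded convolutions so the output size matches the input (the original uses unpadded convolutions with mirrored inputs).

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU (padded, unlike the original)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-Net sketch: a contracting path, an expanding path, and
    the skip connection that concatenates encoder features into the
    decoder for precise localization."""

    def __init__(self, in_ch=1, n_classes=2, base=32):
        super().__init__()
        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)   # base (skip) + base (upsampled)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                   # context at full resolution
        e2 = self.enc2(self.pool(e1))       # coarser context
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                # per-pixel class logits

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```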

                Author and article information

                Journal
                IEEE Transactions on Medical Imaging (IEEE Trans Med Imaging)
                Publisher: IEEE
                ISSN: 0278-0062 (print); 1558-254X (electronic)
                Published online: 26 January 2018; issue date: July 2018
                Volume: 37; Issue: 7; Pages: 1562-1573
                Affiliations
                [1] Wellcome EPSRC Centre for Interventional and Surgical Sciences, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
                [2] Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
                [3] Facultad de Medicina, Universidad Nacional de Colombia, Bogotá 111321, Colombia
                [4] Amadeus S.A.S., 06560 Sophia-Antipolis, France
                [5] Department of Radiology, University Hospitals KU Leuven, 3000 Leuven, Belgium
                [6] Wellcome EPSRC Centre for Interventional and Surgical Sciences, Institute for Women’s Health, University College London, London WC1E 6BT, U.K.
                [7] Department of Obstetrics and Gynaecology, KU Leuven, 3000 Leuven, Belgium
                [8] KU Leuven, 3000 Leuven, Belgium
                Article
                DOI: 10.1109/TMI.2018.2791721
                PMCID: PMC6051485
                PMID: 29969407
                This work is licensed under a Creative Commons Attribution 3.0 License. For more information, see http://creativecommons.org/licenses/by/3.0/
                History
                Received: 11 October 2017; revised: 04 January 2018; accepted: 05 January 2018; date of current version: 30 June 2018
                Page count
                Figures: 13, Tables: 6, Equations: 170, References: 37, Pages: 12
                Funding
                Funded by: Wellcome Trust (fundref 10.13039/100010269). Award IDs: WT101957, WT97914, HICF-T4-275.
                Funded by: Engineering and Physical Sciences Research Council (fundref 10.13039/501100000266). Award IDs: NS/A000027/1, EP/H046410/1, EP/J020990/1, EP/K005278, NS/A000050/1.
                Funded by: Wellcome/EPSRC. Award ID: 203145Z/16/Z.
                Funded by: Royal Society (fundref 10.13039/501100000288). Award ID: RG160569.
                Funded by: University College London (fundref 10.13039/501100000765).
                Funded by: Great Ormond Street Hospital Charity (fundref 10.13039/501100001279).
                Funded by: Nvidia (fundref 10.13039/100007065).
                Funded by: Emerald, a GPU-accelerated High Performance Computer, made available by the Science and Engineering South Consortium operated in partnership with the STFC Rutherford-Appleton Laboratory.
                This work was supported in part by the Wellcome Trust under Grant WT101957, Grant WT97914, and Grant HICF-T4-275, the EPSRC under Grant NS/A000027/1, Grant EP/H046410/1, Grant EP/J020990/1, Grant EP/K005278, and Grant NS/A000050/1, the Wellcome/EPSRC under Grant 203145Z/16/Z, the Royal Society under Grant RG160569, the National Institute for Health Research University College London (UCL) Hospitals Biomedical Research Centre, the Great Ormond Street Hospital Charity, UCL ORS and GRS, NVIDIA, and Emerald, a GPU-accelerated High Performance Computer, made available by the Science and Engineering South Consortium operated in partnership with the STFC Rutherford-Appleton Laboratory.
                Categories
                Article

                Keywords: interactive image segmentation, convolutional neural network, fine-tuning, fetal MRI, brain tumor
