      Imaging of the pial arterial vasculature of the human brain in vivo using high-resolution 7T time-of-flight angiography

      research-article


          Abstract

          The pial arterial vasculature of the human brain is the only blood supply to the neocortex, but quantitative data on the morphology and topology of these mesoscopic arteries (diameter 50–300 µm) remains scarce. Because it is commonly assumed that blood flow velocities in these vessels are prohibitively slow, non-invasive time-of-flight magnetic resonance angiography (TOF-MRA)—which is well suited to high 3D imaging resolutions—has not been applied to imaging the pial arteries. Here, we provide a theoretical framework that outlines how TOF-MRA can visualize small pial arteries in vivo, by employing extremely small voxels at the size of individual vessels. We then provide evidence for this theory by imaging the pial arteries at 140 µm isotropic resolution using a 7 Tesla (T) magnetic resonance imaging (MRI) scanner and prospective motion correction, and show that pial arteries one voxel width in diameter can be detected. We conclude that imaging pial arteries is not limited by slow blood flow, but instead by achievable image resolution. This study represents the first targeted, comprehensive account of imaging pial arteries in vivo in the human brain. This ultra-high-resolution angiography will enable the characterization of pial vascular anatomy across the brain to investigate patterns of blood supply and relationships between vascular and functional architecture.
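          As a rough illustration of the theoretical argument above (this sketch is not taken from the article; the flow velocity, repetition time, flip angle, and relaxation times are assumed values), the following Python snippet estimates how many excitation pulses blood experiences while crossing a single 140 µm voxel and compares its remaining longitudinal magnetization with that of fully saturated static tissue:

# Simplified inflow (time-of-flight) sketch; illustrative only, not the authors' framework.
# All parameter values below are assumptions chosen for illustration.
import numpy as np

def spoiled_gre_mz(n_pulses, flip_deg, tr, t1, m0=1.0):
    """Longitudinal magnetization after n_pulses of a spoiled gradient-echo train,
    starting from magnetization m0 (fresh inflowing blood starts fully relaxed)."""
    alpha = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    mz = m0
    for _ in range(int(n_pulses)):
        mz = mz * np.cos(alpha) * e1 + (1.0 - e1)  # tip-down by alpha, then T1 recovery over one TR
    return mz

voxel_size = 140e-6   # m, isotropic voxel size used in the study
velocity   = 2e-3     # m/s, assumed slow pial-artery blood velocity
tr         = 20e-3    # s, assumed repetition time
flip_deg   = 18.0     # degrees, assumed excitation flip angle
t1_blood   = 2.1      # s, approximate arterial blood T1 at 7 T
t1_tissue  = 1.9      # s, approximate grey-matter T1 at 7 T

pulses_in_voxel = voxel_size / (velocity * tr)             # pulses experienced while crossing one voxel
mz_blood  = spoiled_gre_mz(pulses_in_voxel, flip_deg, tr, t1_blood)
mz_tissue = spoiled_gre_mz(1000, flip_deg, tr, t1_tissue)  # static tissue driven to steady state

print(f"pulses per voxel: {pulses_in_voxel:.1f}")
print(f"blood Mz ~ {mz_blood:.2f} vs static-tissue Mz ~ {mz_tissue:.2f}")

          Under these assumptions, blood leaves the voxel after only a few pulses and retains most of its longitudinal magnetization, whereas static tissue is driven far into saturation; this is the flow-related enhancement that, according to the abstract, persists even at slow pial velocities once the voxels are small enough.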


                Author and article information

                Contributors
                Role: Reviewing Editor
                Role: Senior Editor
                Journal
                eLife (eLife Sciences Publications, Ltd)
                ISSN: 2050-084X
                Published: 29 April 2022
                Volume 11: e71186
                Affiliations
                [1 ] Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital ( https://ror.org/002pd6e78) Charlestown United States
                [2 ] Department of Radiology, Harvard Medical School Boston United States
                [3 ] Centre for Advanced Imaging, The University of Queensland ( https://ror.org/00rqy9422) Brisbane Australia
                [4 ] Department of Biomedical Magnetic Resonance, Institute of Experimental Physics, Otto-von-Guericke-University ( https://ror.org/00ggpsq73) Magdeburg Germany
                [5 ] High Field MR Centre, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna ( https://ror.org/03prydq77) Vienna Austria
                [6 ] Karl Landsteiner Institute for Clinical Molecular MR in Musculoskeletal Imaging Vienna Austria
                [7 ] Department of Neurology, Medical University of Graz ( https://ror.org/01faaaf77) Graz Austria
                [8 ] German Center for Neurodegenerative Diseases ( https://ror.org/043j0f473) Magdeburg Germany
                [9 ] Center for Behavioral Brain Sciences ( https://ror.org/03d1zwe41) Magdeburg Germany
                [10 ] Leibniz Institute for Neurobiology ( https://ror.org/01zwmgk08) Magdeburg Germany
                [11 ] Division of Health Sciences and Technology, Massachusetts Institute of Technology ( https://ror.org/042nb2s44) Cambridge United States
                University of Oxford ( https://ror.org/052gg0110) United Kingdom
                National Institute of Mental Health, National Institutes of Health ( https://ror.org/01cwqze88) United States
                University of Oxford ( https://ror.org/052gg0110) United Kingdom
                University of Oxford ( https://ror.org/052gg0110) United Kingdom
                Author information
                https://orcid.org/0000-0001-8242-8008
                https://orcid.org/0000-0001-5740-4522
                https://orcid.org/0000-0002-7263-9327
                https://orcid.org/0000-0002-1348-1179
                Article
                Article ID: 71186
                DOI: 10.7554/eLife.71186
                PMCID: PMC9150892
                PMID: 35486089
                © 2022, Bollmann et al.

                This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

                History
                Received: 11 June 2021
                Accepted: 28 April 2022
                Funding
                Funded by: National Institute of Biomedical Imaging and Bioengineering (FundRef: http://dx.doi.org/10.13039/100000070); Award ID: P41-EB015896
                Funded by: National Institute of Biomedical Imaging and Bioengineering (FundRef: http://dx.doi.org/10.13039/100000070); Award ID: P41-EB030006
                Funded by: National Institute of Biomedical Imaging and Bioengineering (FundRef: http://dx.doi.org/10.13039/100000070); Award ID: R01-EB019437
                Funded by: National Institute of Neurological Disorders and Stroke (FundRef: http://dx.doi.org/10.13039/100000065); Award ID: R21-NS106706
                Funded by: National Institute of Mental Health (FundRef: http://dx.doi.org/10.13039/100000025); Award ID: R01-MH111438
                Funded by: National Institute of Mental Health (FundRef: http://dx.doi.org/10.13039/100000025); Award ID: R01-MH111419
                Funded by: Natural Sciences and Engineering Research Council of Canada (FundRef: http://dx.doi.org/10.13039/501100000038)
                Funded by: Fonds de recherche du Québec – Nature et technologies (FundRef: http://dx.doi.org/10.13039/501100003151)
                Funded by: Deutsche Forschungsgemeinschaft (FundRef: http://dx.doi.org/10.13039/501100001659); Award ID: MA 9235/1-1
                Funded by: National Institutes of Health (FundRef: http://dx.doi.org/10.13039/100000002); Award ID: S10-RR019371
                Funded by: National Institutes of Health (FundRef: http://dx.doi.org/10.13039/100000002); Award ID: S10-OD02363701
                Funded by: European Commission (FundRef: http://dx.doi.org/10.13039/501100000780); Award ID: MS-fMRI-QSM 794298
                Funded by: National Institute on Aging (FundRef: http://dx.doi.org/10.13039/100000049); Award ID: RF1-AG074008
                Funded by: National Institute of Neurological Disorders and Stroke (FundRef: http://dx.doi.org/10.13039/100000065); Award ID: U19-NS123717
                Funded by: National Institute of Mental Health (FundRef: http://dx.doi.org/10.13039/100000025); Award ID: R01-MH124004
                Funded by: National Institute of Biomedical Imaging and Bioengineering (FundRef: http://dx.doi.org/10.13039/100000070); Award ID: R01-EB032746
                The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
                Categories
                Research Article
                Neuroscience
                Custom metadata
                Partial-volume effects, rather than slow blood flow, were found to be the current limit to imaging pial arteries with MRI; advanced acquisition techniques achieving resolutions below 200 µm in vivo therefore provide a more complete picture of these vessels.
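                As a hypothetical illustration of this partial-volume argument (the 100 µm vessel diameter and the voxel sizes below are assumed values, not measurements from the study), the fraction of an isotropic voxel occupied by a cylindrical artery passing through it scales with the inverse square of the voxel size:

# Hypothetical partial-volume illustration; not a computation from the article.
import math

def vessel_volume_fraction(vessel_diameter_um, voxel_size_um):
    """Fraction of an isotropic voxel occupied by a cylindrical vessel passing
    straight through it: (pi r^2 * L) / L^3 = pi r^2 / L^2."""
    radius = vessel_diameter_um / 2.0
    return min(1.0, math.pi * radius**2 / voxel_size_um**2)

for voxel in (140.0, 300.0, 600.0):                 # µm, assumed voxel sizes
    frac = vessel_volume_fraction(100.0, voxel)     # assumed 100 µm pial artery
    print(f"{voxel:4.0f} µm voxel: vessel occupies {frac:6.1%} of the voxel")

                Under these assumptions, a 100 µm artery fills roughly 40% of a 140 µm voxel but less than 10% of a 300 µm voxel, which is why the vessel signal is diluted once the voxel substantially exceeds the vessel diameter.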

                Life sciences
                magnetic resonance imaging, magnetic resonance angiography, ultra-high field, blood vessel, cerebrovasculature, blood flow, human
