Open Access

Analyzing Dataset Annotation Quality Management in the Wild

Jan-Christoph Klie, Richard Eckart de Castilho, Iryna Gurevych
Computational Linguistics
MIT Press

Abstract

Data quality is crucial for training accurate, unbiased, and trustworthy machine learning models as well as for their correct evaluation. Recent work, however, has shown that even popular datasets used to train and evaluate state-of-the-art models contain a non-negligible amount of erroneous annotations, biases, or artifacts. While practices and guidelines regarding dataset creation projects exist, to our knowledge, no large-scale analysis has yet been performed on how quality management is conducted when creating natural language datasets and whether these recommendations are followed. Therefore, we first survey and summarize recommended quality management practices for dataset creation as described in the literature and provide suggestions for applying them. Then, we compile a corpus of 591 scientific publications introducing text datasets and annotate it for quality-related aspects, such as annotator management, agreement, adjudication, or data validation. Using these annotations, we then analyze how quality management is conducted in practice. A majority of the annotated publications apply good or excellent quality management. However, we deem the effort of 30% of the studies subpar. Our analysis also reveals common errors, especially in the use of inter-annotator agreement and the computation of annotation error rates.
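
The abstract flags inter-annotator agreement as a frequent source of errors. As a point of reference, the sketch below illustrates the standard chance-corrected agreement coefficient from the Cohen (1960) reference listed under "Most cited references". It is a minimal illustration, not code from the paper; the function name and toy labels are hypothetical.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
        observed agreement rate and p_e is the agreement expected by chance
        from each annotator's marginal label distribution."""
        assert len(labels_a) == len(labels_b), "annotators must label the same items"
        n = len(labels_a)

        # Observed agreement: fraction of items labeled identically.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

        # Chance agreement: dot product of the two marginal label distributions.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n ** 2

        return (p_o - p_e) / (1 - p_e)

    # Hypothetical toy data: two annotators labeling ten items for sentiment.
    ann_a = ["pos", "pos", "neg", "neg", "pos", "neu", "neg", "pos", "neu", "pos"]
    ann_b = ["pos", "neg", "neg", "neg", "pos", "neu", "pos", "pos", "neu", "pos"]
    print(f"kappa = {cohens_kappa(ann_a, ann_b):.2f}")  # ~0.68; raw agreement is 0.80

Reporting the raw agreement rate p_o alone overstates reliability under skewed label distributions, which is why chance-corrected coefficients such as kappa are the usual recommendation.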

Most cited references (129)

Landis, J. R., & Koch, G. G. (1977). The Measurement of Observer Agreement for Categorical Data.

Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales.

Bland, J. M., & Altman, D. G. (1986). Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement.
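
Besides agreement coefficients like those in the references above, the abstract also annotates publications for adjudication, i.e., merging conflicting annotator judgments into a single gold label. Below is a minimal sketch of the simplest strategy, majority voting with tie escalation; the helper is illustrative, not from the paper.

    from collections import Counter

    def adjudicate_majority(votes):
        """Return the majority label, or None on a tie so that a human
        adjudicator can resolve it instead of breaking ties arbitrarily."""
        (top, top_count), *rest = Counter(votes).most_common()
        if rest and rest[0][1] == top_count:
            return None  # tie: escalate to an expert adjudicator
        return top

    print(adjudicate_majority(["pos", "pos", "neg"]))  # pos
    print(adjudicate_majority(["pos", "neg"]))         # None (tie)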

Author and article information

Journal: Computational Linguistics
Publisher: MIT Press
ISSN (print): 0891-2017
ISSN (electronic): 1530-9312
Published: September 01, 2024
Volume: 50
Issue: 3
Pages: 817-866
DOI: 10.1162/coli_a_00516
Copyright: © 2024
License: https://creativecommons.org/licenses/by-nc-nd/4.0/
