Dynamic Faceted Search for Technical Support Exploiting Induced Knowledge

In: The Semantic Web – ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2–6, 2020, Proceedings, Part II


Most cited references (11)


          Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement, and he introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations, and judgments about what level of kappa should be acceptable for health research remain contested. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
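For context, the following minimal Python sketch (not taken from the chapter or the cited paper; the rater labels are hypothetical) computes both percent agreement and Cohen's kappa for two raters scoring the same items, showing how kappa discounts the agreement expected by chance:

from collections import Counter

# Hypothetical scores from two raters on the same eight items.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
n = len(rater_a)

# Percent agreement: fraction of items on which the raters assign the same score.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: probability both raters pick the same label independently,
# estimated from each rater's marginal label frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
labels = set(rater_a) | set(rater_b)
p_expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)

# Cohen's kappa: observed agreement corrected for chance, (p_o - p_e) / (1 - p_e).
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"percent agreement = {p_observed:.2f}")   # 0.75
print(f"Cohen's kappa     = {kappa:.2f}")        # 0.50

With these hypothetical labels the raters agree on 6 of 8 items (percent agreement 0.75), but chance agreement is 0.50, so kappa comes out to only 0.50, which is the gap the statistic is designed to expose.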

            DBpedia: A Nucleus for a Web of Open Data


              SuperAgent: A Customer Service Chatbot for E-commerce Websites


                Author and book information

Book Chapter
Published: November 01 2020
Pages: 683-699
DOI: 10.1007/978-3-030-62466-8_42
ID: d6c18983-3da9-4a17-aff1-c6dd0e52d121