
      Künstliche Intelligenz in der Medizin: Von Entlastungen und neuen Anforderungen im ärztlichen Handeln

      Ethik in der Medizin
      Springer Science and Business Media LLC


          Zusammenfassung

The following article examines how the use of Artificial Intelligence (AI) in medicine can, on the one hand, help relieve physicians of certain tasks and support them on the factual level, while, on the other hand, these AI applications give rise to new demands on the social level of medical practice. Drawing on the ethical and social-psychological concepts of trust, explainability, and responsibility, we show at a conceptual level which new challenges arise from the use of medical AI applications, and that these can primarily be met through communication. The need to address these challenges communicatively is discussed against the background of considerations from professional theory and ethics. We thus conclude that the use of medical AI applications will shift the profile of requirements for physicians: the focus moves from purely technical competencies toward a stronger emphasis on communication skills.

          Abstract

          Background

The use of Artificial Intelligence (AI) has the potential to provide relief for physicians in the demanding and often stressful clinical setting. So far, however, the actual changes to physicians' work, including new demands on the social level of medical practice, remain a matter of prediction. This article therefore addresses the question of how the requirements for physicians will change with the implementation of AI.

          Methods

The question is approached through conceptual considerations based on the potential that AI already offers and on the central normative concepts of trust, explainability, and responsibility, which play an important role when implementing AI in everyday clinical practice.

          Conclusion

Interpersonal communication will not disappear with the implementation of AI. Instead, it is much more likely that the exchange between the various actors in medical practice will become increasingly important. This adds another level of complexity to practical concepts such as shared decision making, which must be addressed in empirical research, including the involvement of AI systems as actors in communication.

          Related collections

Most cited references (102)


          Shared Decision Making: A Model for Clinical Practice

          The principles of shared decision making are well documented but there is a lack of guidance about how to accomplish the approach in routine clinical practice. Our aim here is to translate existing conceptual descriptions into a three-step model that is practical, easy to remember, and can act as a guide to skill development. Achieving shared decision making depends on building a good relationship in the clinical encounter so that information is shared and patients are supported to deliberate and express their preferences and views during the decision making process. To accomplish these tasks, we propose a model of how to do shared decision making that is based on choice, option and decision talk. The model has three steps: a) introducing choice, b) describing options, often by integrating the use of patient decision support, and c) helping patients explore preferences and make decisions. This model rests on supporting a process of deliberation, and on understanding that decisions should be influenced by exploring and respecting “what matters most” to patients as individuals, and that this exploration in turn depends on them developing informed preferences.

            Causability and explainability of artificial intelligence in medicine

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. However, their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability, as well as a use case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction.

              Factors Related to Physician Burnout and Its Consequences: A Review

              Physician burnout is a universal dilemma that is seen in healthcare professionals, particularly physicians, and is characterized by emotional exhaustion, depersonalization, and a feeling of low personal accomplishment. In this review, we discuss the contributing factors leading to physician burnout and its consequences for the physician’s health, patient outcomes, and the healthcare system. Physicians face daily challenges in providing care to their patients, and burnout may be from increased stress levels in overworked physicians. Additionally, the healthcare system mandates physicians to keep a meticulous record of their physician-patient encounters along with clerical responsibilities. Physicians are not well-trained in managing clerical duties, and this might shift their focus from solely caring for their patients. This can be addressed by the systematic application of evidence-based interventions, including but not limited to group interventions, mindfulness training, assertiveness training, facilitated discussion groups, and promoting a healthy work environment.

Author and article information

Journal
Ethik in der Medizin (Ethik Med)
Springer Science and Business Media LLC
ISSN: 0935-7335, 1437-1618
Published online: November 14, 2023
Issue: March 2024, Volume 36, Issue 1, pp. 7-29

Article
DOI: 10.1007/s00481-023-00789-z
© 2024
License: https://creativecommons.org/licenses/by/4.0
