Transformer Models in Healthcare: A Survey and Thematic Analysis of Potentials, Shortcomings and Risks

Denecke, Kerstin; May, Richard; Rivera-Romero, Octavio (2024). Transformer Models in Healthcare: A Survey and Thematic Analysis of Potentials, Shortcomings and Risks. Journal of Medical Systems, 48(23), pp. 1-11. Springer. DOI: 10.1007/s10916-024-02043-5

transformer_models_potential.pdf - Published Version
Available under License Creative Commons: Attribution (CC-BY).

Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT), which use transformer model architectures, have significantly advanced artificial intelligence and natural language processing. Recognized for their ability to capture associative relationships between words based on shared context, these models are poised to transform healthcare by improving diagnostic accuracy, tailoring treatment plans, and predicting patient outcomes. However, there are multiple risks and potentially unintended consequences associated with their use in healthcare applications. This study, conducted with 28 participants using a qualitative approach, explores the benefits, shortcomings, and risks of using transformer models in healthcare. It analyses responses to seven open-ended questions using a simplified thematic analysis. Our research reveals seven benefits, including improved operational efficiency, optimized processes and refined clinical documentation. Despite these benefits, there are significant concerns about the introduction of bias, auditability issues and privacy risks. Challenges include the need for specialized expertise, the emergence of ethical dilemmas and the potential reduction in the human element of patient care. For the medical profession, risks include the impact on employment, changes in the patient-doctor dynamic, and the need for extensive training in both system operation and data interpretation.

Item Type:

Journal Article (Original Article)

Division/Institute:

School of Engineering and Computer Science > Institute for Patient-centered Digital Health
School of Engineering and Computer Science > Institute for Patient-centered Digital Health > AI for Health
School of Engineering and Computer Science

Name:

Denecke, Kerstin (ORCID: 0000-0001-6691-396X);
May, Richard and
Rivera-Romero, Octavio

Subjects:

Q Science > Q Science (General)
R Medicine > R Medicine (General)

ISSN:

1573-689X

Publisher:

Springer

Language:

English

Submitter:

Kerstin Denecke

Date Deposited:

21 Feb 2024 14:41

Last Modified:

21 Feb 2024 14:41

Publisher DOI:

10.1007/s10916-024-02043-5

Uncontrolled Keywords:

Large Language Model; Transformer Models; Artificial Intelligence; Healthcare; Generative Artificial Intelligence

ARBOR DOI:

10.24451/arbor.21277

URI:

https://arbor.bfh.ch/id/eprint/21277
