Chapter 4 Risks associated with using large language models in health and care services

Use of large language models has great potential to improve both the efficiency and the quality of health and care services, for example through structuring of free text in patient records, machine-assisted clinical coding, treatment decision support, knowledge support, prediction, sorting and triage of patient inquiries, automatic text generation, clinical chatbots (questions and answers), citizen-facing chatbots (questions and answers), plain-language assistance, speech recognition, health education, health research, translation, and logistics [18].

At the same time, language models have some inherent challenges that can affect both patient safety and the quality of health services, which in turn can undermine overall trust in the health and care services. The use of large language models is still at an early stage, and uncertainty remains about their short- and long-term benefits and gains. Researchers at the Simula Institute point out that "artificial intelligence (AI), and generative AI in particular, is a collection of seductive technologies that, absent good technological insight, can mislead decision-makers at all levels of an organization into making uninformed assessments" [19].

There is also uncertainty about the areas in which large language models can be used responsibly and appropriately in health and care services. At present, they are best suited to low-risk linguistic and administrative tasks. Quality assurance is important regardless of the area of use.

AI systems based on large language models that are intended for use in providing health care carry higher risk. Such systems will most likely qualify as medical devices and are therefore regulated by the Medical Devices Act [20]. The purpose of this act is to prevent harmful effects, accidents, and incidents, and to ensure that medical devices are tested and used in a professionally and ethically sound manner. As of March 2025, we are not aware of any AI system with a generative language model that has been CE-marked following a conformity assessment performed by a notified body.

The AI Act has entered into force in the EU, with the goal of building trust in the use of AI while facilitating innovation. The regulation takes a risk-based approach: the requirements imposed on an AI system are determined by the risk level of its intended use.

Furthermore, both users and developers of AI systems intended for use in health and care services in Norway must comply with general as well as sector-specific regulations, such as Norwegian health legislation, the EU General Data Protection Regulation, the Copyright Act, and the Equality and Anti-Discrimination Act.

This chapter describes both the underlying challenges of large language models and the risks related to their use in health and care services. The risks described are not exhaustive.

4.1 Risks in large language models

4.2 Risks associated with use in health and care services

4.3 Summary

Last updated: 29 July 2025