Introduction

The National Health and Collaboration Plan 2024–2027 emphasizes that AI can play a key role in ensuring the sustainable development of Norway’s healthcare system. AI and other labor-saving technologies are expected to help maintain the quality of services in the coming years while also contributing to reduced waiting times [10].

This report reflects the state of knowledge as of the turn of the year 2024/2025. Given the rapid pace of development in the field of AI, the report will not be continuously updated to reflect emerging knowledge.

The work presented here stems from the subproject Frameworks for Quality Assurance. As AI continues to evolve, new EU legislation is on the way that will influence how AI is used in Norway. Updated guidance materials will therefore be needed as both knowledge and regulatory frameworks develop. Communication, revisions, and further development of supportive tools will be carried out in accordance with the priorities of the Joint AI Plan [11].

The report describes the legal framework, the mandatory requirements and, to some extent, best practices for ensuring that AI systems used in health and care services are trustworthy. The European Union identifies three components of trustworthy AI that a system should satisfy throughout its lifecycle [12]:

  1. It should be lawful, complying with all applicable laws and regulations.
  2. It should be ethical, ensuring adherence to ethical principles and values.
  3. It should be robust, from both a technical and a social perspective, since AI systems can cause unintended harm even when built with good intentions.

Trustworthy AI is characterized by the US National Institute of Standards and Technology (NIST) as follows: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed [13].

This report uses the definition of an AI system given in the AI Act:

...a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments [14].

An AI system can take different forms: standalone applications on a PC or mobile device, web services, components of specialist clinical systems, or modules embedded in electronic health record systems.

AI also has inherent characteristics that make quality assurance more complex than for traditional IT systems. When procuring, implementing, and using AI, organizations must consider several critical factors, including:

  • The black-box problem: AI models, especially those based on deep neural networks, can be difficult to interpret, and in some cases their logic and data are inaccessible due to commercial restrictions. This lack of explainability can undermine trust and hinder adoption among healthcare professionals, and may in turn affect patients’ right to understand how their health information is processed [15]. A minimal illustration of one explanation technique follows this list.
  • Bias: systematic errors in AI systems, often caused by skewed training data. AI models reflect the data they are trained on, so that data must be representative of the population the model will be used on. Training data can also carry discrimination present in society. Bias can cause AI models to produce incorrect, skewed, or discriminatory output [16]. A simple subgroup check is sketched after this list.
  • Hallucination: generative AI models create content based on the data they are trained on. That content may appear true and correct yet be untrue or fabricated, a specific risk that organizations should be aware of when deploying such systems in healthcare settings.
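
As a concrete illustration of the black-box concern, the following Python sketch probes an opaque classifier with permutation importance, one widely used model-agnostic explanation technique. This is a minimal sketch assuming scikit-learn; the model, feature names, and data are invented for the example and are not drawn from this report.

    # Illustrative sketch only: the model, feature names, and data are
    # synthetic stand-ins, not taken from any real clinical system.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Hypothetical feature names for a synthetic "patient" dataset.
    feature_names = ["age", "blood_pressure", "bmi",
                     "lab_marker_a", "lab_marker_b"]
    X, y = make_classification(n_samples=1000, n_features=5,
                               n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Treat the fitted model as a black box: we rely only on its predictions.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance shuffles one feature at a time and measures the
    # drop in test score, indicating how strongly the model relies on it.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, importance in sorted(zip(feature_names, result.importances_mean),
                                   key=lambda pair: -pair[1]):
        print(f"{name}: {importance:.3f}")

Techniques like this do not open the black box, but they give clinicians and assessors a rough, model-agnostic view of which inputs drive a system's output.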
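Similarly, a first line of defense against bias is to compare model performance across patient subgroups. The sketch below is illustrative only, assuming NumPy; the group labels, predictions, and error rates are synthetic and constructed to show what a bias-induced performance gap can look like.

    # Illustrative sketch only: group labels and error rates are synthetic
    # and chosen to mimic a biased model, not real evaluation results.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical evaluation set: true labels, predictions, and a
    # demographic group attribute for each record.
    y_true = rng.integers(0, 2, size=500)
    y_pred = y_true.copy()
    group = rng.choice(["group_a", "group_b"], size=500, p=[0.8, 0.2])

    # Inject more prediction errors for the under-represented group to
    # mimic the effect of unrepresentative training data.
    flip = ((group == "group_b") & (rng.random(500) < 0.30)) | \
           ((group == "group_a") & (rng.random(500) < 0.05))
    y_pred[flip] = 1 - y_pred[flip]

    # Compare accuracy per subgroup; a large gap is a red flag for bias.
    for g in np.unique(group):
        mask = group == g
        accuracy = (y_true[mask] == y_pred[mask]).mean()
        print(f"{g}: n={mask.sum()}, accuracy={accuracy:.2f}")

A marked gap between subgroups does not prove discrimination, but it signals that the representativeness of the training data should be audited before the system is taken into clinical use.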

Goals and target group

This report has been developed to support organizations planning to acquire an AI system, helping to ensure its safe and effective use. The goal is to provide guidance on identifying, assessing, and managing the risks associated with the procurement, implementation, and operation of fully developed AI systems in the healthcare sector. It highlights key issues that organizations should pay particular attention to when assessing whether an AI system can be considered trustworthy.

The target audience for the report includes individuals involved in assessments and decisions related to planning, acquiring, implementing, and managing AI systems in health and care services. The report may also be relevant for staff in organizations that use AI, suppliers of AI systems, and health authorities.

Delimitations

The report is limited to the procurement of fully developed products and therefore does not cover in-house development of AI systems. Issues specific to continuously learning AI systems are not addressed here; these are relevant topics for further work in the subproject.

Generative AI models are not dealt with separately, but another subproject in the Joint AI Plan will focus on assessing the risks associated with large language models and on ensuring that the use of language models is adapted to Norwegian conditions [17].

Relevant sources for the introduction

[12] European Commission, Directorate-General for Communications Networks, Content and Technology: Ethics guidelines for trustworthy AI, Publications Office (data.europa.eu), 2019.

[13] NIST: Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov) (PDF), p. 12; see also Trustworthy and Responsible AI | NIST.

Last updated: 23 May 2025