This report is intended to support organizations planning to acquire an AI system, ensuring that such systems can be used safely and effectively. It outlines the key considerations organizations must address to assess whether an AI system is trustworthy when procuring, implementing, and using AI in health and care services.
The US National Institute of Standards and Technology (NIST) characterises trustworthy AI as follows: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed [1].
In 2019, the European Union published ethical guidelines describing three core components of trustworthy AI that should be upheld throughout a system’s lifecycle:
(1) it should be legal, complying with all applicable laws and regulations, (2) it should be ethical, ensuring adherence to ethical principles and values, and (3) it should be robust, both from a technical and social perspective, as AI systems, even with good intentions, can cause unintended harm [2].
This report describes the current legal framework, outlining what is required by applicable regulations and, in part, what is considered good practice. However, AI is a rapidly evolving field, and new legislation is on the horizon. The precise boundaries between the EU's regulations on medical devices, data protection and artificial intelligence still need to be clarified. Updated guidance will therefore be necessary as knowledge develops and new regulations take effect.
AI systems can take different forms: a stand-alone application on a PC or mobile device, a web service, or components integrated into clinical systems or electronic health records.
With this in mind, the report presents some overarching aspects that organizations must consider. We have structured these into six phases covering procurement, implementation and use of AI.
Legal framework: Adopting AI systems involves assessments related to clinical responsibility, data protection, information security, access to health data, data management and automated decision-making processes.
Health legislation contains obligations and rights for health and care services and patients. Key principles are the right to health care, the requirement for medically sound services and health professionals' duty of confidentiality. The processing of health information is also covered by the General Data Protection Regulation (GDPR), which imposes several requirements on the data controller. The Equality and Anti-Discrimination Act is also relevant [3]. The Medical Devices Regulations regulate manufacturers' responsibilities for ensuring the safety of medical devices, including those that incorporate or consist of AI [4]. The Artificial Intelligence Act aims to ensure that products and systems that utilise AI are safe to use [5]. It imposes requirements on both manufacturers and users of AI (which may be the health and care services). It is also important to identify and apply any relevant domain-specific regulations when procuring, validating, and implementing AI systems.
Risk management: While AI has the potential to enhance healthcare delivery, it also introduces new challenges. Without adequate risk assessment, AI systems may produce, reinforce, or perpetuate unjust or undesirable outcomes for individuals, health services, or society, potentially compromising patient safety.
Validation: Prior to implementation, AI models must be quality-assured, which may include clinical validation. In this report, clinical validation means confirming that the AI model performs as intended for a specific intended purpose [6]. If, for example, the intended purpose of an AI model is to detect fractures on X-rays, it must be validated for precisely that use case.
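To illustrate what validating against a specific intended purpose can involve in practice, the sketch below compares the outputs of a hypothetical fracture-detection model against radiologist-confirmed labels from a local validation dataset and computes sensitivity and specificity. All names, data and the acceptance threshold are illustrative assumptions for this sketch, not requirements drawn from this report; the organization must define its own acceptance criteria for the intended use.

# Minimal sketch: clinical validation of a hypothetical fracture-detection
# model against locally labelled X-rays. Data and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class ValidationResult:
    sensitivity: float  # share of actual fractures the model flagged
    specificity: float  # share of normal X-rays the model correctly cleared

def validate(predictions: list[bool], ground_truth: list[bool]) -> ValidationResult:
    """Compare model output against radiologist-confirmed labels."""
    tp = sum(p and t for p, t in zip(predictions, ground_truth))
    tn = sum(not p and not t for p, t in zip(predictions, ground_truth))
    fp = sum(p and not t for p, t in zip(predictions, ground_truth))
    fn = sum(not p and t for p, t in zip(predictions, ground_truth))
    return ValidationResult(
        sensitivity=tp / (tp + fn) if (tp + fn) else 0.0,
        specificity=tn / (tn + fp) if (tn + fp) else 0.0,
    )

# Example: the model flags 4 of 5 fractures and clears 3 of 4 normal images.
result = validate(
    predictions=[True, True, True, True, False, True, False, False, False],
    ground_truth=[True, True, True, True, True, False, False, False, False],
)
print(f"Sensitivity: {result.sensitivity:.2f}, specificity: {result.specificity:.2f}")

# An acceptance criterion (e.g. sensitivity >= 0.95 for this intended use)
# should be set in advance, before the results are inspected.
assert result.sensitivity >= 0.95, "Model does not meet the validation criterion"

Note that the same model validated for fracture detection would need a separate validation exercise, with its own dataset and acceptance criteria, before being used for any other purpose.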
Ethical principles: AI systems must be both safe and ethically acceptable. The European Commission’s expert group has proposed seven ethical principles, which are also reflected in the Norwegian government’s AI strategy [7]. The strategy emphasizes that AI must be based on ethical principles, ensure transparency, and respect human rights and democratic values [8]. At the same time, it may be ethically questionable not to use AI systems if they can provide better quality, streamline services and thus free up resources for other important tasks.
Six phases for the procurement, implementation and use of AI
This report is divided into six phases that an organization can use as a starting point when considering the procurement and implementation of an AI system. The phases are briefly described in the list below. While many of the phases involve similar quality or risk-related topics, these are addressed with increasing depth and specificity. For low-risk AI applications, the organization does not need to address all aspects at the same level of detail. As new information emerges, it may be necessary to revisit earlier phases and update or repeat parts of the process.
AI fact sheets: In-depth information on selected topics covered in the report can be found in the AI fact sheets published on the inter-agency information site on AI in healthcare [9].
[1] Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov), page 12 and Trustworthy and Responsible AI | NIST: Characteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.
[2] European Commission: Directorate-General for Communications Networks, Content and Technology, Ethics guidelines for trustworthy AI, Publications Office, 2019, https://data.europa.eu/doi/10.2759/346720
[3] Act on Equality and Prohibition of Discrimination (Equality and Anti-Discrimination Act) – Lovdata
[5] The EU adopted the Artificial Intelligence Act (Regulation (EU) 2024/1689) in 2024.
[6] ISO 9000:2015, "Quality management systems – Fundamentals and vocabulary", defines "validation" as: "Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled." https://www.iso.org/obp/ui/#iso:std:iso:9000:ed-4:v1:en
[9] AI fact sheets will be published on Kunstig intelligens (Artificial intelligence) – Helsedirektoratet.