The National Health and Collaboration Plan 2024–2027 emphasizes that AI can play a key role in ensuring the sustainable development of Norway’s healthcare system. AI and other labor-saving technologies are expected to help maintain the quality of services in the coming years while also contributing to reduced waiting times [10].
This report reflects the state of knowledge at the turn of 2024/2025. Given the rapid pace of development in the field of AI, the report will not be continuously updated to reflect emerging knowledge.
The work presented here stems from the subproject Frameworks for Quality Assurance. As AI continues to evolve, new EU legislation is on the way that will influence how AI is used in Norway, and updated guidance materials will be needed as both knowledge and regulatory frameworks develop. Communication, revisions, and further development of supporting tools will be carried out in accordance with the priorities of the Joint AI Plan [11].
The report describes the legal framework, the mandatory requirements and, to some extent, best practices for ensuring that AI systems used in health and care services are trustworthy. The European Union identifies three components of trustworthy AI that a system should satisfy throughout its lifecycle [12]:
- It should be lawful, complying with all applicable laws and regulations.
- It should be ethical, ensuring adherence to ethical principles and values.
- It should be robust, from both a technical and a social perspective, since AI systems can cause unintended harm even when well-intentioned.
The US National Institute of Standards and Technology (NIST) characterizes trustworthy AI as: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed [13].
This report uses the definition of an AI system given in the AI Act:
...a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments [14].
An AI system can take different forms: standalone applications on a PC or mobile device, web services, components of specialist clinical systems, or modules embedded in electronic health record systems.
AI also has inherent characteristics that make quality assurance more complex than for traditional IT systems. When procuring, implementing, and using AI, organizations must consider several critical factors, including:
- The black-box problem: AI models, especially those based on deep neural networks, can be difficult to interpret, and in some cases their logic and data are inaccessible due to commercial restrictions. This lack of explainability can undermine trust and hinder adoption among healthcare professionals, and may in turn affect patients’ right to understand how their health information is processed [15] (see the first sketch after this list).
- Bias: systematic errors in AI systems caused by biases in the training data. AI models reflect the data they are trained on, so that data must be representative of the population the model will be used on; the data may also encode discrimination that exists in society. Bias can cause AI models to produce incorrect, skewed, or discriminatory output [16] (see the second sketch after this list).
- Hallucination: Generative AI models create content based on the data they were trained on. This content may appear true and correct but can be untrue or fabricated, a specific risk that organizations should be aware of when deploying such systems in healthcare settings.
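Where a model itself cannot be opened, post-hoc explanation methods can give some insight into its behavior. The following is a minimal sketch of one such method, permutation importance in scikit-learn, using synthetic data as a stand-in for a real clinical dataset; it illustrates the general technique, not a method prescribed by this report.

```python
# Minimal illustrative sketch: post-hoc explanation of an opaque model
# via permutation importance. Synthetic data stands in for a real
# clinical dataset; nothing here refers to any actual AI system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble model whose internal logic is hard to read directly.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each input feature
# degrade performance on held-out data? Large drops indicate features
# the model relies on, giving a coarse, model-agnostic explanation.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Such methods do not make a black-box model transparent, but they can support the documentation and review of model behavior.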
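One simple operationalisation of the bias concern is to compare model performance across patient subgroups. The sketch below, again with synthetic data and a hypothetical subgroup label, illustrates the idea; a real assessment would use clinically meaningful groups and metrics.

```python
# Minimal illustrative sketch: checking for performance disparities
# across a hypothetical subgroup label. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
# Hypothetical binary subgroup label (e.g., sex), assigned at random here.
group = np.random.default_rng(1).integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Compare sensitivity (recall) per subgroup: a large gap may signal
# that the training data under-represents or misrepresents one group.
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: recall = {recall_score(y_te[mask], pred[mask]):.3f}")
```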
Goals and target group
This report has been developed to support organizations planning to acquire an AI system, helping to ensure its safe and effective use. The goal is to provide guidance on identifying, assessing, and managing the risks associated with the procurement, implementation, and operation of fully developed AI systems in the healthcare sector. It highlights key issues that organizations should pay particular attention to when assessing whether an AI system can be considered trustworthy.
The target audience for the report includes individuals involved in assessments and decisions related to planning, acquiring, implementing, and managing AI systems in health and care services. The report may also be relevant for staff in organizations that use AI, suppliers of AI systems, and health authorities.
Delimitations
The report is limited to the procurement of fully developed products and therefore does not cover in-house development of AI systems. Issues specific to continuously learning AI systems are not addressed here; these are relevant topics for further work in the subproject.
Generative AI models are not dealt with separately, but another subproject in the Joint AI Plan will focus on assessing the risks associated with large language models and on ensuring that the use of language models is adapted to Norwegian conditions [17].
Relevant sources for the introduction
- A buyer's guide to AI in health and care – NHS Transformation Directorate (digital.nhs.uk)
- What is AI? Can you make a clear distinction between AI and non-AI systems? – OECD.AI
- Norwegian Digitalisation Agency: Guidance for responsible development and use of artificial intelligence in the public sector (digdir.no)
- Adoption of AI in healthcare – a comprehensive white paper prepared by DNV in an international collaboration, describing what to consider when procuring artificial intelligence-based tools: https://www.dnv.com/Publications/how-do-i-turn-this-on-what-to-consider-when-adopting-ai-based-tools-into-clinical-practice-237225
- The National Institute of Standards and Technology (NIST) has published a risk management framework for AI, which describes how to work systematically with risk management of AI systems: Artificial Intelligence Risk Management Framework (AI RMF 1.0) (PDF). NIST has also published an article on explainable AI: Four Principles of Explainable Artificial Intelligence (PDF)
- The Ethics Guidelines for Trustworthy AI (digital-strategy.ec.europa.eu), prepared by an expert group appointed by the European Commission, set out three components of trustworthy artificial intelligence: lawful, ethical, and robust AI. The same principles are reflected in the Norwegian government's national strategy for artificial intelligence (regjeringen.no) from 2020 and in Digital Norway of the future - national digitalisation strategy 2024–2030 (regjeringen.no)
- Generating evidence for artificial intelligence-based medical devices: a framework for training, validation and evaluation (WHO, 2021) (PDF)
- The Code of Conduct for information security and data protection (Normen) has published an article on security risks in systems that use artificial intelligence and how to set security requirements for suppliers of such systems: Normen (Norwegian)
- FDA: U.S. Food and Drug Administration (fda.gov)
- ISO/IEC 22989:2022 Artificial intelligence – concepts and terminology establishes a terminology for AI and describes concepts in the field (Standard Norge | standard.no)
- Built-in discrimination protection: A guide to detecting and preventing discrimination in the development and use of artificial intelligence (ldo.no)
[12] European Commission, Directorate-General for Communications Networks, Content and Technology: Ethics guidelines for trustworthy AI. Publications Office (data.europa.eu), 2019.
[13] Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov) (PDF), p. 12; see also Trustworthy and Responsible AI | NIST.