Any organization operating within the health and care sector is required to provide professionally sound health and care services and to ensure good patient safety [131]. Therefore, the AI system should be tested using representative data from the organization to verify its accuracy, precision, and reliability. The fact that an AI system is CE-marked does not necessarily mean that it will perform well in practice on the organization’s specific patient population, equipment, or examination protocols.
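As a hypothetical sketch (not part of the guideline), the kind of performance check implied here could be as simple as computing basic metrics on a locally collected, representative test set and comparing them with the supplier's claimed figures. The function names and data below are illustrative assumptions only:

```python
def accuracy(y_true, y_pred):
    """Fraction of cases where the AI system's output matches the reference standard."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    """Of the cases the system flagged as positive, the fraction that truly were positive."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    return tp / (tp + fp) if (tp + fp) else 0.0

# Reference labels from the organization's own patients vs. the system's predictions
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
print(accuracy(y_true, y_pred))   # 0.6
print(precision(y_true, y_pred))  # ≈ 0.667
```

If the locally measured figures fall clearly below the supplier's reported performance, that is an early signal that the system may not transfer well to the organization's patient population.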
The term validation can have several meanings depending on the context. Validation of AI models refers to the process of evaluating a model’s performance and provides a measure of how well the model generalizes to unseen data.
Terms commonly used in relation to AI include internal validation, which is performed by the developer; external validation, which is carried out by an independent party; technical validation, which takes place during the development process; and clinical validation, which is conducted in a clinical setting after the AI model has been fully developed.
Validation can be performed either retrospectively, using previously collected data, possibly gathered for a different purpose, or prospectively, using data from a real-world use setting.
The black box problem with AI refers to the challenge that advanced AI models, especially those based on deep neural networks, can produce results without clearly explaining how those results were derived. This lack of transparency and explainability fundamentally conflicts with the core principles of evidence-based medical diagnostics and treatment. In addition, the internal logic of the model is often inaccessible to the organization for commercial reasons [132]. The organization must therefore ensure that the AI system is safe to use. Given the difficulty in explaining how an AI model functions, clinical validation using the organization’s own data remains a key method for assessing how well the system performs within the target patient population.
Validation using representative datasets helps detect potential biases in an AI model early in the process, enabling the organization to address any weaknesses before the system is deployed in clinical practice. If an AI model fails to provide adequate performance for a population group, the organization must develop a plan to ensure that appropriate care is provided to this group [133].
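One way such a bias check could be carried out, sketched here as a hypothetical illustration with invented subgroup labels and data, is to compute a performance metric such as sensitivity separately for each population group and look for large gaps:

```python
from collections import defaultdict

def subgroup_sensitivity(y_true, y_pred, groups):
    """Per-subgroup sensitivity (true positive rate).

    y_true, y_pred: 0/1 reference labels and model predictions per case;
    groups: subgroup label per case. A large gap between subgroups may
    indicate that the model performs inadequately for one group.
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            if p == 1:
                tp[g] += 1
            else:
                fn[g] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in tp.keys() | fn.keys()}

# Illustrative data: the model misses more positive cases in group "B"
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(subgroup_sensitivity(y_true, y_pred, groups))  # ≈ 1.0 for "A", ≈ 0.33 for "B"
```

A gap of this size between groups would, in the terms used above, oblige the organization to address the weakness or plan for alternative care for the disadvantaged group before deployment.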
The extent of clinical validation required depends on several factors, including how the AI model was trained, the nature of the training data, and the intended clinical application. When an organization adopts an AI system that has already been implemented and validated in another institution with a similar patient population, identical equipment, and comparable clinical protocols, the scope of additional clinical validation may be limited. Conversely, if the AI system is intended to operate with a high degree of autonomy, more comprehensive clinical validation will be necessary. Validation of AI systems should be risk-based and be repeated, either in full or in part, whenever the supplier releases updates or new versions of the AI system.
Kortesniemi et al. have recently published an article in the field of radiology outlining practical approaches for testing AI systems prior to clinical implementation [134]. The methods presented in the article may also prove useful for other clinical specialties.
Regulations governing access to data for validation
The healthcare organization is responsible for conducting the clinical validation and will typically act as the data controller for the data used in this process. Health data must be handled in accordance with confidentiality regulations. While consent from individual patients can serve as a legal basis for data use, this is often impractical or infeasible when large volumes of data are involved.
As data controller, the organization must have a legal basis for processing, and an exemption from the duty of confidentiality, if patient data from medical records is to be used in its clinical validation process for purposes other than direct patient care (§ 20 of the Patient Records Act [135]). Internal quality assurance aimed at verifying that the quality of care remains equivalent or improves with the use of the AI system may qualify for such an exemption under the statutory exception in the Health Personnel Act § 26 [136].
This legal basis may also apply to collaborating organizations, provided there is a formal agreement as described in Section 9 of the Patient Records Act [137]. If multiple organizations have jointly procured an AI system and there is a need to share information between them, the exemption provision in the Health Personnel Act § 29 may be relevant, allowing for the use of data for quality assurance purposes [138][139]. A formal exemption decision provides both an exception to the duty of confidentiality and a legal basis for the use of health information from patient records and other treatment-related health registers for purposes such as developing, testing and using clinical decision support tools, as well as quality assurance.
It is often necessary or appropriate for the supplier of the AI model to play a role in the validation process conducted within the healthcare organization. As a rule, the supplier will then act as a data processor for the organization, processing data on its behalf. In such cases, the GDPR requires that a data processing agreement be established between the organization and the supplier [140]. This agreement must be in place before any processing of personal data begins; see also phase 4 regarding contractual arrangements.
Assessment of the AI system's performance will be part of the validation process, refer to AI fact sheet 3. For further description of validation, see AI fact sheet 2 [141].
[133] Algorithms, artificial intelligence and discrimination: An analysis of the possibilities and limitations of the Equality and Anti-Discrimination Act (PDF)
[135] Act relating to the processing of health information in connection with the provision of health care (Patient Records Act) – Lovdata
[137] Act relating to the processing of health information in connection with the provision of health care (the Patient Records Act) – Chapter 2: Patient records and other treatment-related health registers – Lovdata
[138] More information about section 29 in the Norwegian Directorate of Health's circular to the Health Personnel Act: Duty of confidentiality and right to information – Norwegian Directorate of Health
[139] Read more about how to apply for an exemption: Application for exemption from confidentiality – Norwegian Directorate of Health
[141] AI fact sheets will be published on Kunstig intelligens – Helsedirektoratet