Logging and incident management

For medical devices, the supplier or manufacturer is obligated to monitor AI systems that are CE-marked under the Medical Device Regulation (MDR), even after they have been deployed (post-market surveillance). This includes responsibility for collecting and evaluating feedback from users of the device. The purpose is to ensure continued regulatory compliance, maintain safety and performance standards, and identify any need for immediate corrective actions [157].

The AI Act will impose similar requirements on high-risk AI systems, and deployers are also expected to contribute to fulfilling these obligations [158]. The organization must ensure that the AI system’s automatic logs are retained and properly managed [159]. This post-market surveillance provides systematic documentation of performance and safety issues observed during operational use, supporting error correction and system improvement. Regular reporting to relevant authorities may be required. The designated market surveillance authority for AI systems in Norway has not yet been determined [160].

Adverse events

It is essential to establish clear procedures for where and how incidents should be reported once the AI system is in use. The organization should ensure that its internal incident reporting systems support reporting of AI-related incidents, whether clinical or technical, so that all AI-relevant events are captured.

Incidents in the health and care service can be reported through melde.no. Patients can also use melde.no to report adverse events, or contact HelseNorge.no directly.

Incidents classified as serious adverse events must be reported. For AI systems, the organization is obligated to report serious incidents to the supplier without delay, in accordance with Article 26(5) of the AI Act.

The Organisation for Economic Co-operation and Development (OECD) has established a global incident monitoring database for reporting AI-related adverse events: AI Incidents Monitor (oecd.ai).

Quality improvement

An effective quality management system enables the organisation to monitor the AI system so that declines in model performance or other adverse events can be identified in a timely manner. This allows the organisation to manage deviations, learn from them, and implement necessary corrective actions. Such actions may include requesting updates from the supplier, phasing out the existing AI system, or procuring a new one. Continuous quality improvement may also involve adjusting workflows, training, communication, or other operational routines.
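One way to make such monitoring concrete is a rolling check of observed performance against the level validated at deployment. The sketch below is illustrative only: the window size, baseline, and tolerance are assumptions chosen for the example, and in practice they would be set by the organization's quality management system and the supplier's stated performance claims.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window check that flags a sustained drop in model performance.

    Baseline, tolerance, and window size are illustrative assumptions; real
    values come from the quality management system and supplier documentation.
    """

    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 100):
        self.baseline = baseline    # performance level validated at deployment
        self.tolerance = tolerance  # acceptable drop before a deviation is raised
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        """Record whether one AI output was confirmed correct, e.g. by clinical review."""
        self.outcomes.append(1.0 if correct else 0.0)

    def deviation(self) -> bool:
        """True once the window is full and observed accuracy sits below baseline - tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough observations yet to judge
        observed = sum(self.outcomes) / len(self.outcomes)
        return observed < self.baseline - self.tolerance
```

A flagged deviation would then feed into the organization's ordinary deviation-handling process, for example triggering a review, a report to the supplier, or one of the corrective actions described above.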

[157] Including Article 83 of the Medical Device Regulation (MDR): REGULATION (EU) 2017/745 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL (PDF) and Article 78 of the Regulation on in vitro diagnostic medical devices: REGULATION (EU) 2017/746 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL (PDF)

[159] Article 26 of the AI Act on the obligations of organizations: Obligations of deployers of high-risk AI systems (eur-lex.europa.eu)

[160] The Norwegian Agency for Public and Financial Management (DFØ) recommends that the Norwegian Communications Authority (Nkom) serve as the national supervisory body and market surveillance authority, and act as the point of contact with the EU for the AI Act. Why should Nkom be responsible for AI? (cw.no)

Last update: 23 May 2025