Human factors

As with the introduction of any other technology, human factors play an important role in determining whether an AI system will function effectively in practice within the organisation. AI systems are socio-technical in nature, meaning they are influenced by the interaction between medical technology, clinical practice and human factors, including the behaviour and decisions of patients and healthcare professionals [91]. The benefits and risks associated with an AI system will vary depending on the context and how it is used [92].

Ethical assessments and guidelines

An organisation’s norms, values, and behavioural culture form the foundation for the ethically responsible use of AI. In the specialist health services, clinical ethics committees are available and may be consulted as needed. In the municipal sector, KS (The Norwegian Association of Local and Regional Authorities) collaborates with the Centre for Medical Ethics to support municipalities in navigating ethical questions in healthcare. The organisation may also have internal policies or ethical guidelines that restrict or govern the use of AI, for example, prohibiting the use of AI for specific purposes.

Some ethical questions that may arise include:

  • How will patients respond to knowing that decisions affecting their care are informed by an AI system and not solely by a human professional?
  • Is the system fair? Does it support equitable access to health and care services, regardless of diagnosis, location, financial situation, gender, country of origin, ethnicity, or individual life circumstances? Does it promote equality in quality and outcomes across different patient groups?

Organisational leadership

Organisations may benefit from implementing an AI management system, such as ISO/IEC 42001 Management system for artificial intelligence (standard.no). This standard can help organisations identify and reduce the risks associated with implementing AI.

The organisation should assess the changes that introducing an AI system will bring to existing work processes. This helps ensure that leadership is well informed about how deploying the AI system will affect staff and the services provided.

This assessment may include:

  • Identifying affected stakeholders and their expectations, including:
    • Potential changes in workload and/or routines for healthcare personnel during and after implementation
  • Assessing the need for resources and training for those who will both use and manage the AI system, including:
    • How to handle the system’s limitations
    • Whether new roles, responsibilities, or collaborations are required
    • Whether there is a need for skills development or capacity building
  • Planning for unanticipated situations and their consequences, such as temporary cost increases if:
    • An existing system needs to run in parallel with the new AI system
    • The AI system experiences temporary downtime

When the AI Act enters into force, organisations that intend to implement certain types of high-risk AI systems will be required to conduct a Fundamental Rights Impact Assessment (FRIA) [93]. This assessment is intended to help identify any potential adverse impacts the AI system may have on fundamental rights. Based on the findings, the organisation must implement appropriate measures to safeguard those rights and ensure compliance with the other requirements of the AI Act.

Users of the AI system

The Health Personnel Act requires that healthcare professionals provide safe and responsible care, and it is essential that they possess the necessary qualifications and competence to use the AI system as intended. They must feel confident that the AI system will effectively support their work processes and should therefore be involved in the procurement process. Their engagement, ownership, and expertise are crucial for ensuring acceptance and proper use of the AI system. It is important to identify their needs for training and support during the implementation process, as well as any resistance to changes in work routines.

How the use of AI as decision support impacts the competence of those using the system is also an important consideration. The organisation must ensure that the competence of healthcare professionals using AI is maintained over time, in line with the Regulations on Management and Quality Improvement in the Health and Care Services [94]. It may be relevant to establish mechanisms to address this, such as routines that help healthcare personnel avoid or reduce the risk of automation bias—that is, an overreliance on AI tools that may undermine professional judgement and the quality of care.

The AI Act will introduce requirements for transparency, including the need for AI systems to be interpretable, and for it to be clear to users that they are interacting with an AI system [95]. In addition, under the AI Act, employers will be obligated to inform employees when AI systems are introduced in the workplace [96].

Individuals on whom the AI system will be used

AI systems are often used to provide advice or decision support that directly concerns citizens, patients, and other service recipients. The organisation should plan how to provide clear and appropriate information to these groups, recognising that the use of AI may generate both enthusiasm and scepticism.

The use of an AI system can lead to perceptible changes in how a service is experienced—by citizens, healthcare personnel, patients, and service users alike. Transparency and clear communication about the goals and purposes of AI use, and how it supports healthcare delivery, may be essential to building public trust in the use of AI in health and care services.

  • A survey from the United Kingdom found that nine out of ten adults viewed the use of AI for cancer detection positively. However, 56% were concerned about relying too heavily on technology rather than professional judgement, and 47% were worried about the difficulty of determining responsibility when something goes wrong [97].

Norway’s National AI Strategy also emphasises that transparency includes the right of individuals to be informed when they are interacting with an AI system [98].

Under the Patients’ and Users’ Rights Act, patients have the right to participate in decisions about their care and to receive information about the content of their healthcare, including potential risks [99]. The GDPR requires that individuals are informed about how their personal data is processed. Organisations should consider informing individuals which health data is shared with the AI system and for what purposes [100].

The organisation should assess whether changes to internal work processes are needed to safeguard patients’ rights when implementing an AI system. A user advisory committee may be consulted in this process.

[93] Article 27 of the AI Act. A FRIA shall include, among other things, a description of the processes in which the AI system will be used, the time period and frequency of use, the persons or groups that may be affected, the specific risks of harm, a description of human oversight and other risk-mitigation measures, and how complaints will be handled.

[95] Article 13 of the AI Act.

[96] Article 26 of the AI Act.

[97] Ada Lovelace Institute and The Alan Turing Institute, How do people feel about AI? A nationally representative survey of public attitudes to artificial intelligence in Britain (2023). Available at adalovelaceinstitute.org.

[99] Chapter 3 of the Patients’ and Users’ Rights Act.

[100] Articles 13 and 14 of the General Data Protection Regulation.

Last update: 23 May 2025