Providers of AI systems are generally required to carry out comprehensive risk assessments before placing their products on the market. Healthcare organizations that adopt AI systems must conduct their own internal risk assessments, taking into account how an AI system might pose a risk in interaction with their existing systems, procedures, and staff.
Even if there is insufficient information early in the process to conduct a full risk assessment, it is still crucial to identify any potential significant negative consequences. These could ultimately outweigh the intended benefits. Relevant assessment points include:
- whether it meets the requirement to provide sound healthcare services
- how it will affect the organization's expertise and competence
- whether it will lead to overuse of healthcare services
- whether it will affect patient safety
- whether it could introduce discriminatory outcomes [19]
- whether it could pose a risk to data privacy
Procuring an AI system that has already been adopted successfully in similar organizations will generally be less risky than acquiring a completely unknown AI system.
[19] Discrimination can be difficult to detect without the necessary data; see Heart Room for Ethical AI (datatilsynet.no), a sandbox project with Ahus. See also Built-in discrimination protection: A guide to detecting and preventing discrimination in the development and use of artificial intelligence (ldo.no).