In addition to the standard assessments conducted when introducing ICT systems, such as a Data Protection Impact Assessment (DPIA) and a Risk and Vulnerability Assessment, this phase may require an AI-specific impact assessment to identify risks unique to AI [119].
A DPIA will often be required when using AI, as AI constitutes a new and innovative technology (in accordance with the GDPR) [120]. When the AI Act enters into force, organisations providing services to the public will be required to conduct a Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems used in areas that may affect human rights [121].
A Fundamental Rights Impact Assessment (FRIA) is a detailed analysis of:
- The organisation's processes: how will the AI system be used in practice?
- Time period and frequency: how long and how often will the system be used?
- Affected individuals: which categories of individuals or groups are likely to be affected?
- Specific risks: what risks of harm may arise for the affected individuals?
- Measures for human oversight: how will you ensure human control and monitoring of the system?
- Measures to manage risk: what measures will be implemented to minimise or eliminate the identified risks?
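The FRIA elements above can be sketched as a simple record structure. This is a minimal illustration only, not an official template; the class and field names are our own, chosen to mirror the list above:

```python
from dataclasses import dataclass


@dataclass
class FRIARecord:
    """Hypothetical structure capturing the FRIA elements listed above."""
    processes: str                  # how the AI system will be used in practice
    period_and_frequency: str       # how long and how often it will be used
    affected_groups: list[str]      # categories of individuals or groups likely to be affected
    specific_risks: list[str]       # risks of harm that may arise for those individuals
    oversight_measures: list[str]   # how human control and monitoring will be ensured
    mitigation_measures: list[str]  # measures to minimise or eliminate the identified risks

    def is_complete(self) -> bool:
        # Every element should be addressed before the assessment is considered done.
        return all([
            self.processes,
            self.period_and_frequency,
            self.affected_groups,
            self.specific_risks,
            self.oversight_measures,
            self.mitigation_measures,
        ])
```

A structure like this makes it easy to check, for each high-risk AI system, that no element of the assessment has been left blank.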
The purpose of a FRIA is to protect fundamental rights, strengthen trust in AI, and prevent adverse outcomes.
An AI impact assessment may include (but is not limited to) issues related to patient safety, fairness, transparency, explainability, cybersecurity, information security, privacy, financial impact, accessibility and human rights, and the potential for malicious use of the system [122]. There may be some overlap between such an AI impact assessment, a DPIA and a FRIA. It may therefore be appropriate to conduct these assessments in parallel. There are several sources of information on how to conduct an impact assessment, and various templates [123][124][125]. The Danish Data Protection Agency has published a template intended for a privacy impact assessment when using personal data in AI systems [126].
The National Institute of Standards and Technology (NIST) has published a risk framework for artificial intelligence, the AI Risk Management Framework (nist.gov). This framework can be used to build a structure for managing the risks associated with artificial intelligence and provides an overview of the different types of risk that should be considered.
These assessments form the basis of the organisation's risk management plan and determine which additional measures must be implemented beyond those described in the AI system's user manual [127][128].
[119] Companies must comply with the requirements of the Personal Data Act and the General Data Protection Regulation, cf. the Act relating to the processing of personal data (personopplysningsloven) (lovdata.no)
[120] GDPR article 35: Data protection impact assessment (eur-lex.europa.eu)
[121] Fundamental rights impact assessment (FRIA), Article 27 of the AI Act
[122] See also ISO/IEC 42001:2023: Management system for artificial intelligence.
[128] Article 26 of the AI Act