Requirements specification

The groundwork established in earlier phases forms the basis for developing the requirements specification for the procurement. If an early market dialogue was conducted during Phase 2, it will have provided insight into available products and their capabilities and limitations.

A requirements specification is a detailed document outlining the specific requirements the system must meet. It is important to consider the appropriate level of detail; overly specific requirements may limit the range of potential solutions. It can be useful to formulate user stories and allow suppliers to propose how the needs can be met, or alternatively to provide a detailed description of how the AI system should be configured and operate.

The organization must require the manufacturer to provide documentation that enables a thorough assessment of the AI system’s performance, trustworthiness, and safety before the contract is signed.

The upcoming AI Act includes requirements for AI systems classified as limited or high risk. It is advisable to take these into account when specifying requirements, in addition to other regulatory obligations [112]; see Phase 2. Article 11 of the AI Act, together with Annex IV, describes in more detail the documentation required for a high-risk AI system [113][114].

The requirements specification divides the requirements into four groups:

  1. General requirements
  2. Functional requirements
  3. Technical requirements
  4. Safety requirements

General requirements

General requirements may include an obligation for the supplier to address ethical and societal considerations related to the AI system [115]. This can include requirements for safeguarding fundamental human rights, such as health and safety, as well as environmental and sustainability considerations. AI systems consume substantial amounts of energy, both for data storage and for training and operating large AI models. As of 1 January 2024, an amendment to the Regulation on Public Procurement requires that environmental and climate considerations be weighted at a minimum of 30 percent in all procurements where environmental impact is relevant, including procurements of AI systems. All AI systems should contribute to fulfilling the United Nations Sustainable Development Goals.

Under the AI Act, high-risk AI systems will also be required to be designed in a way that ensures transparency and human oversight. The organization may request that the supplier describe the training and testing methods used. Requirements for quality management systems and logging, as outlined in the AI Act, may also be included. For medical devices, monitoring is a mandatory supplier obligation in accordance with the MDR.

Functional requirements

Users of the AI system should define functional requirements, for example, how the user interface should work, and how the AI system’s output should be presented to the end user. It may also be necessary to require explainability, interpretability, and transparency for those who will be using the AI system.

Data quality is a particularly important consideration in AI procurements, as the data used to train AI models directly affects system performance, reliability, and fairness. Requirements should be set for the supplier to explain which data [116] were used during development, both for training and validation, and on what legal basis the data were processed under the GDPR. The AI Act also includes data quality requirements: Article 10 sets out obligations for data and data handling in high-risk AI systems. Training, validation, and testing data must meet quality criteria: they must be relevant, sufficiently representative, and, as far as possible, free of errors and complete in relation to the intended use [117].
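The kinds of data-quality criteria named above (completeness, representativeness) can be made concrete as automated checks the supplier is asked to document or run. The sketch below is a minimal illustration; the field names, the grouping variable, and the 30-percent threshold are invented for the example and are not prescribed by the AI Act.

```python
# Minimal sketch of automated data-quality checks a procuring organization
# might ask a supplier to document. Field names and thresholds are
# illustrative assumptions, not requirements from the AI Act.

def check_data_quality(rows, required_fields, group_field, min_group_share=0.05):
    """Return findings: missing required values and under-represented groups."""
    findings = []
    # Completeness: flag records with missing required fields
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                findings.append(f"row {i}: missing '{field}'")
    # Representativeness: flag groups below a minimum share of the dataset
    counts = {}
    for row in rows:
        key = row.get(group_field, "unknown")
        counts[key] = counts.get(key, 0) + 1
    total = len(rows)
    for key, n in counts.items():
        if total and n / total < min_group_share:
            findings.append(f"group '{key}': only {n}/{total} records")
    return findings

data = [
    {"age": 70, "sex": "F", "image_id": "a1"},
    {"age": None, "sex": "M", "image_id": "a2"},
    {"age": 55, "sex": "M", "image_id": "a3"},
    {"age": 61, "sex": "M", "image_id": ""},
]
print(check_data_quality(data, ["age", "image_id"], "sex", min_group_share=0.3))
```

Real procurements would express such checks against the supplier's documented data dictionary rather than hard-coded field names.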

There should also be requirements for documentation of the AI system’s performance and quality in actual use, for example, how clinical evaluation or testing has been conducted, on whom, and with what equipment.

Technical requirements

The AI system must be compatible with the organization's infrastructure. Requirements should therefore be set for descriptions of data flow and data formats that will be processed and exchanged between the AI system and other systems. Requirements may also include the scalability of the AI system in terms of number of users or usage volume, as well as any need for test environments. Reference is often made here to the organization’s internal standard documents, or to equivalent international or Norwegian standards and technical certifications.
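Requirements for data flow and data formats can be anchored in a machine-checkable interface contract. The following sketch shows the idea with a hypothetical JSON payload from the AI system; the field names and types are invented for illustration, and a real procurement would instead reference the organization's interface specifications or a formal schema standard.

```python
# Sketch of validating a data-exchange payload against a simple contract.
# Field names and types are hypothetical examples; a real integration would
# reference the organization's own interface specification or schema.
import json

CONTRACT = {"patient_id": str, "finding": str, "confidence": float}

def validate_payload(raw: str):
    """Check that a JSON payload from the AI system matches the contract."""
    payload = json.loads(raw)
    errors = []
    for field, ftype in CONTRACT.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}: expected {ftype.__name__}")
    return errors

ok = '{"patient_id": "p-001", "finding": "nodule", "confidence": 0.87}'
bad = '{"patient_id": "p-002", "confidence": "high"}'
print(validate_payload(ok))
print(validate_payload(bad))
```

Such a contract also gives a concrete artifact for the test environments mentioned above: both parties can validate sample payloads before go-live.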

AI models can be deployed either locally on the organization’s own servers or in the cloud. When using cloud solutions, it is important to define requirements in accordance with current national recommendations.

Safety requirements

Most organizations in the health and care sector are required to comply with the Code of Conduct for information security and data protection (Normen). The measures described in the Code of Conduct, which are important for securing digital systems in general, are equally important for protecting AI systems.

The organization must require that the supplier has considered the special challenges and threats associated with AI systems. The Code of Conduct has published an article on security in AI systems, and MITRE ATLAS™ provides an overview of known threat scenarios targeting AI systems.

The European Union Agency for Cybersecurity (ENISA) identifies the following threats in the AI lifecycle:

  • Evasion: An attacker tricks the AI system using specially crafted inputs (adversarial examples, prompt injections).
  • Poisoning: An attacker influences the training data or the AI model so that the system's behavior shifts in a chosen direction.
  • Information leakage: An attacker extracts information about the AI model, such as its configuration, parameters, or training data.
  • Compromised components in the AI system: An attacker compromises a component, for example by exploiting vulnerabilities in open-source libraries used by the AI system.
  • AI system failure: The entire AI system fails, for example because of an error or because an attacker manages to take it down.
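The evasion threat can be illustrated with a toy example: a small, targeted perturbation of the input flips the decision of a simple linear classifier, even though the input still looks almost unchanged. The model, weights, and step size below are invented purely for illustration and do not represent any real AI system.

```python
# Toy illustration of an evasion attack (adversarial example): a small,
# targeted perturbation of the input flips a linear classifier's decision.
# All numbers are invented for illustration only.

def linear_classifier(x, weights, bias=0.0):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else 0

weights = [0.5, -0.8]
x = [1.0, 0.5]  # legitimate input; score = 0.1, classified as 1

# Move each feature a small step against the decision (sign-based step,
# in the spirit of the fast gradient sign method)
eps = 0.3
x_adv = [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(linear_classifier(x, weights))      # original decision
print(linear_classifier(x_adv, weights))  # decision after perturbation
```

The same principle scales to deep models, where gradient information guides far subtler perturbations; this is why suppliers should be asked how they test robustness against such inputs.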

[116] How the data were selected, their origin, and the processes by which they were generated; what equipment was used to collect them (for example, which camera equipment was used when developing an AI model for image interpretation); and how the data are labelled, and whether the labels are correct, complete, and appropriate for the intended purpose.

Last update: 23 May 2025