Responsible Innovation and Social Knowledge in Artificial Intelligence (RISK-AI)

The Nordic and Baltic countries form a highly integrated region in Europe and lead in the digitalization of healthcare. Despite this high degree of digitalization, the healthcare systems in the region share common social and health challenges, for example aging populations.

Artificial intelligence (AI) holds promise for supporting clinical decision-making and practice, which may lead to more effective management of resources, releasing pressure on the healthcare system. However, it also entails an ensemble of heterogeneous risks (e.g., clinical, social, organizational) that may be obscure to the key stakeholders in this sector.   

The European Commission proposed a horizontal regulation, the AI Act, as a risk-based approach to regulating AI technologies. The AI Act entered into force in August 2024, with compliance obligations phasing in over 36 months. In parallel, governments are developing and implementing strategies and governance models to facilitate the integration of AI into healthcare, aligned with the EU regulations on AI.

Within this dynamic context, the RISK-AI project is guided by a main research question: how can organizations better identify, understand, and manage AI risks in healthcare in ways that enhance trustworthiness and support responsible use of AI?  

The work will address the urgent need to build knowledge and capacity on the shared societal, technical, and political challenges that healthcare organizations face in complying with the AI Act, and the need to meet societal expectations for a responsible use of AI. The ambition is to encapsulate these two pressures in the concept of 'Responsible Risk'. This new concept will incorporate the diverse risk components introduced by AI and the known best practices for assessing them, accounting for the need to tailor the risk assessment to the organization and to the level of maturity of the AI tool under consideration.

Two clinical use cases, one in Norway and one in Latvia, are planned for piloting the concept of Responsible Risk. The pilots will apply a validated instrument (MAS-AI) to develop actionable insight into tolerances for risk-taking. Furthermore, the project will promote AI literacy by training healthcare personnel and other stakeholders to understand, assess, and mitigate the heterogeneous risks intrinsic to AI in medicine.

Contacts

Bodil Aurstad

Special Adviser

Mathias Hamberg

Special Adviser
